00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2082 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3347 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.169 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.171 The recommended git tool is: git 00:00:00.172 using credential 00000000-0000-0000-0000-000000000002 00:00:00.177 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.201 Fetching changes from the remote Git repository 00:00:00.202 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.224 Using shallow fetch with depth 1 00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.224 > git --version # timeout=10 00:00:00.242 > git --version # 'git version 2.39.2' 00:00:00.242 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.261 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.261 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.263 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.274 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.285 Checking out Revision 3aaeb01851f3410c69bd29d15f29de9bbe186390 (FETCH_HEAD) 00:00:06.285 > git config core.sparsecheckout # timeout=10 00:00:06.296 > git read-tree -mu HEAD # timeout=10 00:00:06.311 > git checkout -f 3aaeb01851f3410c69bd29d15f29de9bbe186390 # timeout=5 00:00:06.332 Commit message: "jenkins/autotest: use known issue detector function from shm lib" 00:00:06.332 > git rev-list --no-walk 3aaeb01851f3410c69bd29d15f29de9bbe186390 # timeout=10 00:00:06.441 [Pipeline] Start of Pipeline 00:00:06.451 [Pipeline] library 00:00:06.452 Loading library shm_lib@master 00:00:06.453 Library shm_lib@master is cached. Copying from home. 00:00:06.464 [Pipeline] node 00:00:06.473 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.474 [Pipeline] { 00:00:06.482 [Pipeline] catchError 00:00:06.483 [Pipeline] { 00:00:06.494 [Pipeline] wrap 00:00:06.502 [Pipeline] { 00:00:06.508 [Pipeline] stage 00:00:06.509 [Pipeline] { (Prologue) 00:00:06.523 [Pipeline] echo 00:00:06.524 Node: VM-host-SM16 00:00:06.528 [Pipeline] cleanWs 00:00:06.535 [WS-CLEANUP] Deleting project workspace... 00:00:06.535 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.540 [WS-CLEANUP] done 00:00:06.733 [Pipeline] setCustomBuildProperty 00:00:06.820 [Pipeline] httpRequest 00:00:06.835 [Pipeline] echo 00:00:06.836 Sorcerer 10.211.164.101 is alive 00:00:06.844 [Pipeline] retry 00:00:06.845 [Pipeline] { 00:00:06.856 [Pipeline] httpRequest 00:00:06.860 HttpMethod: GET 00:00:06.861 URL: http://10.211.164.101/packages/jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:06.861 Sending request to url: http://10.211.164.101/packages/jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:06.875 Response Code: HTTP/1.1 200 OK 00:00:06.876 Success: Status code 200 is in the accepted range: 200,404 00:00:06.876 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:09.425 [Pipeline] } 00:00:09.442 [Pipeline] // retry 00:00:09.450 [Pipeline] sh 00:00:09.731 + tar --no-same-owner -xf jbp_3aaeb01851f3410c69bd29d15f29de9bbe186390.tar.gz 00:00:09.746 [Pipeline] httpRequest 00:00:09.778 [Pipeline] echo 00:00:09.780 Sorcerer 10.211.164.101 is alive 00:00:09.789 [Pipeline] retry 00:00:09.792 [Pipeline] { 00:00:09.806 [Pipeline] httpRequest 00:00:09.810 HttpMethod: GET 00:00:09.811 URL: http://10.211.164.101/packages/spdk_227b8322cef040b9932bd4a19ce8c0db4cd734f8.tar.gz 00:00:09.811 Sending request to url: http://10.211.164.101/packages/spdk_227b8322cef040b9932bd4a19ce8c0db4cd734f8.tar.gz 00:00:09.825 Response Code: HTTP/1.1 200 OK 00:00:09.825 Success: Status code 200 is in the accepted range: 200,404 00:00:09.826 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_227b8322cef040b9932bd4a19ce8c0db4cd734f8.tar.gz 00:01:09.895 [Pipeline] } 00:01:09.912 [Pipeline] // retry 00:01:09.918 [Pipeline] sh 00:01:10.197 + tar --no-same-owner -xf spdk_227b8322cef040b9932bd4a19ce8c0db4cd734f8.tar.gz 00:01:12.740 [Pipeline] sh 00:01:13.045 + git -C spdk log --oneline -n5 00:01:13.045 227b8322c module/sock: free addr info before return 00:01:13.045 29119cdfb nvmf: move register nvmf_poll_group_poll interrupt to nvmf 00:01:13.045 c7d225385 nvmf/tcp: replace pending_buf_queue with nvmf_tcp_request_get_buffers 00:01:13.045 18ede8d38 nvmf: enable iobuf based queuing for nvmf requests 00:01:13.045 a48eba161 nvmf: change order of functions in the transport.c file 00:01:13.062 [Pipeline] withCredentials 00:01:13.072 > git --version # timeout=10 00:01:13.084 > git --version # 'git version 2.39.2' 00:01:13.098 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:13.101 [Pipeline] { 00:01:13.110 [Pipeline] retry 00:01:13.112 [Pipeline] { 00:01:13.126 [Pipeline] sh 00:01:13.405 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:13.674 [Pipeline] } 00:01:13.693 [Pipeline] // retry 00:01:13.699 [Pipeline] } 00:01:13.716 [Pipeline] // withCredentials 00:01:13.726 [Pipeline] httpRequest 00:01:13.743 [Pipeline] echo 00:01:13.744 Sorcerer 10.211.164.101 is alive 00:01:13.755 [Pipeline] retry 00:01:13.757 [Pipeline] { 00:01:13.772 [Pipeline] httpRequest 00:01:13.777 HttpMethod: GET 00:01:13.777 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:13.778 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:13.778 Response Code: HTTP/1.1 200 OK 00:01:13.779 Success: Status code 200 is in the accepted range: 200,404 00:01:13.779 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 
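The httpRequest + tar steps in this stage pull pre-packaged source snapshots (jbp, spdk, and dpdk) from the 10.211.164.101 package cache and unpack them into the workspace. A rough shell equivalent of that fetch-and-extract pattern, for illustration only (the job itself uses the Jenkins httpRequest step; the URL and commit hash below are taken from the log above):

  # fetch a cached source snapshot and unpack it without altering file ownership
  pkg=spdk_227b8322cef040b9932bd4a19ce8c0db4cd734f8.tar.gz
  curl -fSs -o "$pkg" "http://10.211.164.101/packages/$pkg"
  tar --no-same-owner -xf "$pkg"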
00:01:20.810 [Pipeline] } 00:01:20.827 [Pipeline] // retry 00:01:20.836 [Pipeline] sh 00:01:21.116 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:22.505 [Pipeline] sh 00:01:22.784 + git -C dpdk log --oneline -n5 00:01:22.784 caf0f5d395 version: 22.11.4 00:01:22.784 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:22.784 dc9c799c7d vhost: fix missing spinlock unlock 00:01:22.784 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:22.784 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:22.800 [Pipeline] writeFile 00:01:22.813 [Pipeline] sh 00:01:23.091 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:23.103 [Pipeline] sh 00:01:23.383 + cat autorun-spdk.conf 00:01:23.383 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.383 SPDK_TEST_NVMF=1 00:01:23.383 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.383 SPDK_TEST_URING=1 00:01:23.383 SPDK_TEST_USDT=1 00:01:23.383 SPDK_RUN_UBSAN=1 00:01:23.383 NET_TYPE=virt 00:01:23.383 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:23.383 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:23.383 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:23.390 RUN_NIGHTLY=1 00:01:23.392 [Pipeline] } 00:01:23.405 [Pipeline] // stage 00:01:23.420 [Pipeline] stage 00:01:23.422 [Pipeline] { (Run VM) 00:01:23.436 [Pipeline] sh 00:01:23.716 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:23.716 + echo 'Start stage prepare_nvme.sh' 00:01:23.716 Start stage prepare_nvme.sh 00:01:23.716 + [[ -n 4 ]] 00:01:23.716 + disk_prefix=ex4 00:01:23.716 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:23.716 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:23.716 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:23.716 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.716 ++ SPDK_TEST_NVMF=1 00:01:23.716 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.716 ++ SPDK_TEST_URING=1 00:01:23.716 ++ SPDK_TEST_USDT=1 00:01:23.716 ++ SPDK_RUN_UBSAN=1 00:01:23.716 ++ NET_TYPE=virt 00:01:23.716 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:23.716 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:23.716 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:23.716 ++ RUN_NIGHTLY=1 00:01:23.716 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:23.716 + nvme_files=() 00:01:23.716 + declare -A nvme_files 00:01:23.716 + backend_dir=/var/lib/libvirt/images/backends 00:01:23.716 + nvme_files['nvme.img']=5G 00:01:23.716 + nvme_files['nvme-cmb.img']=5G 00:01:23.716 + nvme_files['nvme-multi0.img']=4G 00:01:23.716 + nvme_files['nvme-multi1.img']=4G 00:01:23.716 + nvme_files['nvme-multi2.img']=4G 00:01:23.716 + nvme_files['nvme-openstack.img']=8G 00:01:23.716 + nvme_files['nvme-zns.img']=5G 00:01:23.716 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:23.716 + (( SPDK_TEST_FTL == 1 )) 00:01:23.716 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:23.716 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:23.716 + for nvme in "${!nvme_files[@]}" 00:01:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.716 + for nvme in "${!nvme_files[@]}" 00:01:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.716 + for nvme in "${!nvme_files[@]}" 00:01:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:23.716 + for nvme in "${!nvme_files[@]}" 00:01:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.716 + for nvme in "${!nvme_files[@]}" 00:01:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.716 + for nvme in "${!nvme_files[@]}" 00:01:23.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:23.716 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.717 + for nvme in "${!nvme_files[@]}" 00:01:23.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:24.284 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:24.284 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:24.284 + echo 'End stage prepare_nvme.sh' 00:01:24.284 End stage prepare_nvme.sh 00:01:24.295 [Pipeline] sh 00:01:24.574 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:24.574 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:01:24.574 00:01:24.574 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:24.574 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:24.574 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:24.574 HELP=0 00:01:24.574 DRY_RUN=0 00:01:24.574 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:24.574 NVME_DISKS_TYPE=nvme,nvme, 00:01:24.574 NVME_AUTO_CREATE=0 00:01:24.574 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:24.574 NVME_CMB=,, 00:01:24.574 NVME_PMR=,, 00:01:24.574 NVME_ZNS=,, 00:01:24.574 NVME_MS=,, 00:01:24.574 NVME_FDP=,, 
00:01:24.574 SPDK_VAGRANT_DISTRO=fedora39 00:01:24.574 SPDK_VAGRANT_VMCPU=10 00:01:24.574 SPDK_VAGRANT_VMRAM=12288 00:01:24.574 SPDK_VAGRANT_PROVIDER=libvirt 00:01:24.574 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:24.574 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:24.574 SPDK_OPENSTACK_NETWORK=0 00:01:24.574 VAGRANT_PACKAGE_BOX=0 00:01:24.575 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:24.575 FORCE_DISTRO=true 00:01:24.575 VAGRANT_BOX_VERSION= 00:01:24.575 EXTRA_VAGRANTFILES= 00:01:24.575 NIC_MODEL=e1000 00:01:24.575 00:01:24.575 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:24.575 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:27.860 Bringing machine 'default' up with 'libvirt' provider... 00:01:28.118 ==> default: Creating image (snapshot of base box volume). 00:01:28.377 ==> default: Creating domain with the following settings... 00:01:28.377 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1723408958_35a5dbbb432a7bed8b9b 00:01:28.377 ==> default: -- Domain type: kvm 00:01:28.377 ==> default: -- Cpus: 10 00:01:28.377 ==> default: -- Feature: acpi 00:01:28.377 ==> default: -- Feature: apic 00:01:28.377 ==> default: -- Feature: pae 00:01:28.377 ==> default: -- Memory: 12288M 00:01:28.377 ==> default: -- Memory Backing: hugepages: 00:01:28.377 ==> default: -- Management MAC: 00:01:28.377 ==> default: -- Loader: 00:01:28.377 ==> default: -- Nvram: 00:01:28.377 ==> default: -- Base box: spdk/fedora39 00:01:28.377 ==> default: -- Storage pool: default 00:01:28.377 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1723408958_35a5dbbb432a7bed8b9b.img (20G) 00:01:28.377 ==> default: -- Volume Cache: default 00:01:28.377 ==> default: -- Kernel: 00:01:28.377 ==> default: -- Initrd: 00:01:28.377 ==> default: -- Graphics Type: vnc 00:01:28.377 ==> default: -- Graphics Port: -1 00:01:28.377 ==> default: -- Graphics IP: 127.0.0.1 00:01:28.377 ==> default: -- Graphics Password: Not defined 00:01:28.377 ==> default: -- Video Type: cirrus 00:01:28.377 ==> default: -- Video VRAM: 9216 00:01:28.377 ==> default: -- Sound Type: 00:01:28.377 ==> default: -- Keymap: en-us 00:01:28.377 ==> default: -- TPM Path: 00:01:28.377 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:28.377 ==> default: -- Command line args: 00:01:28.377 ==> default: -> value=-device, 00:01:28.377 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:28.377 ==> default: -> value=-drive, 00:01:28.377 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:28.377 ==> default: -> value=-device, 00:01:28.377 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.377 ==> default: -> value=-device, 00:01:28.377 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:28.377 ==> default: -> value=-drive, 00:01:28.377 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:28.377 ==> default: -> value=-device, 00:01:28.377 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.377 ==> default: -> value=-drive, 00:01:28.377 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:28.377 ==> default: -> value=-device, 00:01:28.377 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.377 ==> default: -> value=-drive, 00:01:28.377 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:28.377 ==> default: -> value=-device, 00:01:28.377 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.378 ==> default: Creating shared folders metadata... 00:01:28.378 ==> default: Starting domain. 00:01:30.278 ==> default: Waiting for domain to get an IP address... 00:01:48.361 ==> default: Waiting for SSH to become available... 00:01:48.361 ==> default: Configuring and enabling network interfaces... 00:01:51.649 default: SSH address: 192.168.121.161:22 00:01:51.649 default: SSH username: vagrant 00:01:51.649 default: SSH auth method: private key 00:01:53.575 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:01.717 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:06.985 ==> default: Mounting SSHFS shared folder... 00:02:07.921 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:07.921 ==> default: Checking Mount.. 00:02:09.296 ==> default: Folder Successfully Mounted! 00:02:09.296 ==> default: Running provisioner: file... 00:02:09.863 default: ~/.gitconfig => .gitconfig 00:02:10.430 00:02:10.430 SUCCESS! 00:02:10.430 00:02:10.430 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:10.430 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:10.430 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:10.430 00:02:10.439 [Pipeline] } 00:02:10.453 [Pipeline] // stage 00:02:10.461 [Pipeline] dir 00:02:10.462 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:10.463 [Pipeline] { 00:02:10.475 [Pipeline] catchError 00:02:10.477 [Pipeline] { 00:02:10.489 [Pipeline] sh 00:02:10.767 + vagrant ssh-config --host vagrant 00:02:10.767 + sed -ne /^Host/,$p 00:02:10.767 + tee ssh_conf 00:02:14.068 Host vagrant 00:02:14.068 HostName 192.168.121.161 00:02:14.068 User vagrant 00:02:14.068 Port 22 00:02:14.068 UserKnownHostsFile /dev/null 00:02:14.068 StrictHostKeyChecking no 00:02:14.068 PasswordAuthentication no 00:02:14.068 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:14.068 IdentitiesOnly yes 00:02:14.068 LogLevel FATAL 00:02:14.068 ForwardAgent yes 00:02:14.068 ForwardX11 yes 00:02:14.068 00:02:14.081 [Pipeline] withEnv 00:02:14.083 [Pipeline] { 00:02:14.096 [Pipeline] sh 00:02:14.375 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:14.375 source /etc/os-release 00:02:14.375 [[ -e /image.version ]] && img=$(< /image.version) 00:02:14.375 # Minimal, systemd-like check. 
00:02:14.375 if [[ -e /.dockerenv ]]; then 00:02:14.375 # Clear garbage from the node's name: 00:02:14.375 # agt-er_autotest_547-896 -> autotest_547-896 00:02:14.375 # $HOSTNAME is the actual container id 00:02:14.375 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:14.375 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:14.375 # We can assume this is a mount from a host where container is running, 00:02:14.375 # so fetch its hostname to easily identify the target swarm worker. 00:02:14.375 container="$(< /etc/hostname) ($agent)" 00:02:14.375 else 00:02:14.375 # Fallback 00:02:14.375 container=$agent 00:02:14.375 fi 00:02:14.375 fi 00:02:14.375 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:14.375 00:02:14.645 [Pipeline] } 00:02:14.661 [Pipeline] // withEnv 00:02:14.670 [Pipeline] setCustomBuildProperty 00:02:14.684 [Pipeline] stage 00:02:14.686 [Pipeline] { (Tests) 00:02:14.700 [Pipeline] sh 00:02:14.981 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:15.253 [Pipeline] sh 00:02:15.534 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:15.808 [Pipeline] timeout 00:02:15.808 Timeout set to expire in 40 min 00:02:15.810 [Pipeline] { 00:02:15.826 [Pipeline] sh 00:02:16.108 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:16.676 HEAD is now at 227b8322c module/sock: free addr info before return 00:02:16.689 [Pipeline] sh 00:02:16.970 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:17.244 [Pipeline] sh 00:02:17.524 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:17.801 [Pipeline] sh 00:02:18.081 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:18.341 ++ readlink -f spdk_repo 00:02:18.341 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:18.341 + [[ -n /home/vagrant/spdk_repo ]] 00:02:18.341 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:18.341 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:18.341 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:18.341 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:18.341 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:18.341 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:18.341 + cd /home/vagrant/spdk_repo 00:02:18.341 + source /etc/os-release 00:02:18.341 ++ NAME='Fedora Linux' 00:02:18.341 ++ VERSION='39 (Cloud Edition)' 00:02:18.341 ++ ID=fedora 00:02:18.341 ++ VERSION_ID=39 00:02:18.341 ++ VERSION_CODENAME= 00:02:18.341 ++ PLATFORM_ID=platform:f39 00:02:18.341 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:18.341 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:18.341 ++ LOGO=fedora-logo-icon 00:02:18.341 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:18.341 ++ HOME_URL=https://fedoraproject.org/ 00:02:18.341 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:18.341 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:18.341 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:18.341 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:18.341 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:18.341 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:18.341 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:18.341 ++ SUPPORT_END=2024-11-12 00:02:18.341 ++ VARIANT='Cloud Edition' 00:02:18.341 ++ VARIANT_ID=cloud 00:02:18.341 + uname -a 00:02:18.341 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:18.341 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:18.600 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:18.865 Hugepages 00:02:18.865 node hugesize free / total 00:02:18.865 node0 1048576kB 0 / 0 00:02:18.865 node0 2048kB 0 / 0 00:02:18.865 00:02:18.865 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:18.865 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:18.865 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:18.865 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:18.865 + rm -f /tmp/spdk-ld-path 00:02:18.865 + source autorun-spdk.conf 00:02:18.865 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.865 ++ SPDK_TEST_NVMF=1 00:02:18.865 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:18.865 ++ SPDK_TEST_URING=1 00:02:18.865 ++ SPDK_TEST_USDT=1 00:02:18.865 ++ SPDK_RUN_UBSAN=1 00:02:18.865 ++ NET_TYPE=virt 00:02:18.865 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:18.865 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:18.865 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:18.865 ++ RUN_NIGHTLY=1 00:02:18.865 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:18.865 + [[ -n '' ]] 00:02:18.865 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:18.865 + for M in /var/spdk/build-*-manifest.txt 00:02:18.865 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:18.865 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:18.865 + for M in /var/spdk/build-*-manifest.txt 00:02:18.865 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:18.865 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:18.865 + for M in /var/spdk/build-*-manifest.txt 00:02:18.865 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:18.865 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:18.865 ++ uname 00:02:18.865 + [[ Linux == \L\i\n\u\x ]] 00:02:18.865 + sudo dmesg -T 00:02:18.865 + sudo dmesg --clear 00:02:18.865 + dmesg_pid=6100 00:02:18.865 + sudo dmesg -Tw 00:02:18.865 + 
[[ Fedora Linux == FreeBSD ]] 00:02:18.865 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:18.865 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:18.865 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:18.865 + [[ -x /usr/src/fio-static/fio ]] 00:02:18.865 + export FIO_BIN=/usr/src/fio-static/fio 00:02:18.865 + FIO_BIN=/usr/src/fio-static/fio 00:02:18.865 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:18.865 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:18.865 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:18.865 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:18.865 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:18.865 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:18.865 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:18.865 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:18.865 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:18.865 Test configuration: 00:02:18.865 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.865 SPDK_TEST_NVMF=1 00:02:18.865 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:18.865 SPDK_TEST_URING=1 00:02:18.865 SPDK_TEST_USDT=1 00:02:18.865 SPDK_RUN_UBSAN=1 00:02:18.865 NET_TYPE=virt 00:02:18.865 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:18.865 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:18.865 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:19.125 RUN_NIGHTLY=1 20:43:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:19.125 20:43:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:19.125 20:43:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:19.125 20:43:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:19.125 20:43:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.125 20:43:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.125 20:43:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:19.125 20:43:29 -- paths/export.sh@5 -- $ export PATH 00:02:19.125 20:43:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
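autorun-spdk.conf is a flat KEY=value file that the prepare and autorun scripts source to decide which tests to run; the ++ lines above trace each assignment. A minimal sketch of consuming such a conf file from a standalone script (hypothetical helper, not part of the SPDK scripts):

  # export every variable assigned while sourcing so child processes inherit them
  set -a
  . ./autorun-spdk.conf      # e.g. SPDK_TEST_NVMF=1, SPDK_RUN_UBSAN=1, NET_TYPE=virt
  set +a
  if [[ "${SPDK_TEST_NVMF:-0}" -eq 1 ]]; then
      echo "NVMe-oF tests enabled over ${SPDK_TEST_NVMF_TRANSPORT}"
  fi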
00:02:19.125 20:43:29 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:19.125 20:43:29 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:19.125 20:43:29 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1723409009.XXXXXX 00:02:19.125 20:43:29 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1723409009.09oRZz 00:02:19.125 20:43:29 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:19.125 20:43:29 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:02:19.125 20:43:29 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:19.125 20:43:29 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:19.125 20:43:29 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:19.125 20:43:29 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:19.125 20:43:29 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:19.125 20:43:29 -- common/autotest_common.sh@394 -- $ xtrace_disable 00:02:19.125 20:43:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:19.125 20:43:29 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:19.125 20:43:29 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:19.125 20:43:29 -- pm/common@17 -- $ local monitor 00:02:19.125 20:43:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.125 20:43:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:19.125 20:43:29 -- pm/common@21 -- $ date +%s 00:02:19.125 20:43:29 -- pm/common@25 -- $ sleep 1 00:02:19.125 20:43:29 -- pm/common@21 -- $ date +%s 00:02:19.125 20:43:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1723409009 00:02:19.125 20:43:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1723409009 00:02:19.125 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1723409009_collect-vmstat.pm.log 00:02:19.125 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1723409009_collect-cpu-load.pm.log 00:02:20.063 20:43:30 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:20.063 20:43:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:20.063 20:43:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:20.063 20:43:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:20.063 20:43:30 -- spdk/autobuild.sh@16 -- $ date -u 00:02:20.063 Sun Aug 11 08:43:30 PM UTC 2024 00:02:20.063 20:43:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:20.063 v24.09-pre-396-g227b8322c 00:02:20.063 20:43:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:20.063 20:43:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:20.063 20:43:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:20.063 20:43:30 -- 
common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:20.063 20:43:30 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:20.063 20:43:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.063 ************************************ 00:02:20.063 START TEST ubsan 00:02:20.063 ************************************ 00:02:20.063 using ubsan 00:02:20.063 20:43:30 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:20.063 00:02:20.063 real 0m0.000s 00:02:20.063 user 0m0.000s 00:02:20.063 sys 0m0.000s 00:02:20.063 20:43:30 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:20.063 20:43:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:20.063 ************************************ 00:02:20.063 END TEST ubsan 00:02:20.063 ************************************ 00:02:20.063 20:43:30 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:20.063 20:43:30 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:20.063 20:43:30 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:20.063 20:43:30 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:20.063 20:43:30 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:20.063 20:43:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.323 ************************************ 00:02:20.323 START TEST build_native_dpdk 00:02:20.323 ************************************ 00:02:20.323 20:43:30 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:20.323 caf0f5d395 version: 22.11.4 00:02:20.323 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:20.323 dc9c799c7d vhost: fix missing spinlock unlock 00:02:20.323 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:20.323 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:20.323 20:43:30 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:20.323 20:43:30 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:20.323 20:43:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:20.324 20:43:30 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:20.324 patching file config/rte_config.h 00:02:20.324 Hunk #1 succeeded at 60 (offset 1 line). 00:02:20.324 20:43:30 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:02:20.324 20:43:30 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:02:20.324 20:43:30 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:20.324 patching file lib/pcapng/rte_pcapng.c 00:02:20.324 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:20.324 20:43:30 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:20.324 20:43:30 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:02:20.324 20:43:30 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:20.324 20:43:30 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:20.324 20:43:30 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:25.594 The Meson build system 00:02:25.594 Version: 1.5.0 00:02:25.594 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:25.594 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:25.594 Build type: native build 00:02:25.594 Program cat found: YES (/usr/bin/cat) 00:02:25.594 Project name: DPDK 00:02:25.594 Project version: 22.11.4 00:02:25.594 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:25.594 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:25.594 Host machine cpu family: x86_64 00:02:25.594 Host machine cpu: x86_64 00:02:25.594 Message: ## Building in Developer Mode ## 00:02:25.594 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:25.594 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:25.594 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:25.594 Program objdump found: YES (/usr/bin/objdump) 00:02:25.594 Program python3 found: YES (/usr/bin/python3) 00:02:25.594 Program cat found: YES (/usr/bin/cat) 00:02:25.594 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
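The scripts/common.sh trace above splits each version string on ".-:" and compares it component by component to decide which DPDK compatibility patches to apply; for 22.11.4 both config/rte_config.h and lib/pcapng/rte_pcapng.c are patched, as shown. A simplified stand-alone rendering of that check (illustration only, not the actual cmp_versions implementation):

  # succeed when version $1 sorts strictly before version $2
  version_lt() {
      local IFS='.-:'
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < 3; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal versions are not strictly less
  }
  version_lt 22.11.4 21.11.0 || echo "apply rte_config.h patch"   # 22 > 21, not older
  version_lt 22.11.4 24.07.0 && echo "apply rte_pcapng.c patch"   # 22 < 24, older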
00:02:25.594 Checking for size of "void *" : 8 00:02:25.594 Checking for size of "void *" : 8 (cached) 00:02:25.594 Library m found: YES 00:02:25.594 Library numa found: YES 00:02:25.594 Has header "numaif.h" : YES 00:02:25.594 Library fdt found: NO 00:02:25.594 Library execinfo found: NO 00:02:25.594 Has header "execinfo.h" : YES 00:02:25.594 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:25.594 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:25.594 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:25.594 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:25.594 Run-time dependency openssl found: YES 3.1.1 00:02:25.594 Run-time dependency libpcap found: YES 1.10.4 00:02:25.594 Has header "pcap.h" with dependency libpcap: YES 00:02:25.594 Compiler for C supports arguments -Wcast-qual: YES 00:02:25.594 Compiler for C supports arguments -Wdeprecated: YES 00:02:25.594 Compiler for C supports arguments -Wformat: YES 00:02:25.594 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:25.594 Compiler for C supports arguments -Wformat-security: NO 00:02:25.594 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:25.594 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:25.594 Compiler for C supports arguments -Wnested-externs: YES 00:02:25.594 Compiler for C supports arguments -Wold-style-definition: YES 00:02:25.594 Compiler for C supports arguments -Wpointer-arith: YES 00:02:25.594 Compiler for C supports arguments -Wsign-compare: YES 00:02:25.594 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:25.594 Compiler for C supports arguments -Wundef: YES 00:02:25.594 Compiler for C supports arguments -Wwrite-strings: YES 00:02:25.594 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:25.594 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:25.594 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:25.594 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:25.594 Compiler for C supports arguments -mavx512f: YES 00:02:25.594 Checking if "AVX512 checking" compiles: YES 00:02:25.594 Fetching value of define "__SSE4_2__" : 1 00:02:25.594 Fetching value of define "__AES__" : 1 00:02:25.594 Fetching value of define "__AVX__" : 1 00:02:25.594 Fetching value of define "__AVX2__" : 1 00:02:25.594 Fetching value of define "__AVX512BW__" : (undefined) 00:02:25.594 Fetching value of define "__AVX512CD__" : (undefined) 00:02:25.594 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:25.594 Fetching value of define "__AVX512F__" : (undefined) 00:02:25.594 Fetching value of define "__AVX512VL__" : (undefined) 00:02:25.594 Fetching value of define "__PCLMUL__" : 1 00:02:25.594 Fetching value of define "__RDRND__" : 1 00:02:25.594 Fetching value of define "__RDSEED__" : 1 00:02:25.594 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:25.594 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:25.594 Message: lib/kvargs: Defining dependency "kvargs" 00:02:25.594 Message: lib/telemetry: Defining dependency "telemetry" 00:02:25.594 Checking for function "getentropy" : YES 00:02:25.594 Message: lib/eal: Defining dependency "eal" 00:02:25.594 Message: lib/ring: Defining dependency "ring" 00:02:25.594 Message: lib/rcu: Defining dependency "rcu" 00:02:25.594 Message: lib/mempool: Defining dependency "mempool" 00:02:25.594 Message: lib/mbuf: Defining dependency "mbuf" 00:02:25.594 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:25.594 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:25.594 Compiler for C supports arguments -mpclmul: YES 00:02:25.594 Compiler for C supports arguments -maes: YES 00:02:25.594 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:25.594 Compiler for C supports arguments -mavx512bw: YES 00:02:25.594 Compiler for C supports arguments -mavx512dq: YES 00:02:25.594 Compiler for C supports arguments -mavx512vl: YES 00:02:25.594 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:25.594 Compiler for C supports arguments -mavx2: YES 00:02:25.594 Compiler for C supports arguments -mavx: YES 00:02:25.594 Message: lib/net: Defining dependency "net" 00:02:25.594 Message: lib/meter: Defining dependency "meter" 00:02:25.594 Message: lib/ethdev: Defining dependency "ethdev" 00:02:25.594 Message: lib/pci: Defining dependency "pci" 00:02:25.594 Message: lib/cmdline: Defining dependency "cmdline" 00:02:25.594 Message: lib/metrics: Defining dependency "metrics" 00:02:25.594 Message: lib/hash: Defining dependency "hash" 00:02:25.594 Message: lib/timer: Defining dependency "timer" 00:02:25.594 Fetching value of define "__AVX2__" : 1 (cached) 00:02:25.594 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:25.594 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:25.594 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:25.594 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:25.594 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:25.594 Message: lib/acl: Defining dependency "acl" 00:02:25.594 Message: lib/bbdev: Defining dependency "bbdev" 00:02:25.595 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:25.595 Run-time dependency libelf found: YES 0.191 00:02:25.595 Message: lib/bpf: Defining dependency "bpf" 00:02:25.595 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:25.595 Message: lib/compressdev: Defining dependency "compressdev" 00:02:25.595 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:25.595 Message: lib/distributor: Defining dependency "distributor" 00:02:25.595 Message: lib/efd: Defining dependency "efd" 00:02:25.595 Message: lib/eventdev: Defining dependency "eventdev" 00:02:25.595 Message: lib/gpudev: Defining dependency "gpudev" 00:02:25.595 Message: lib/gro: Defining dependency "gro" 00:02:25.595 Message: lib/gso: Defining dependency "gso" 00:02:25.595 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:25.595 Message: lib/jobstats: Defining dependency "jobstats" 00:02:25.595 Message: lib/latencystats: Defining dependency "latencystats" 00:02:25.595 Message: lib/lpm: Defining dependency "lpm" 00:02:25.595 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:25.595 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:25.595 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:25.595 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:25.595 Message: lib/member: Defining dependency "member" 00:02:25.595 Message: lib/pcapng: Defining dependency "pcapng" 00:02:25.595 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:25.595 Message: lib/power: Defining dependency "power" 00:02:25.595 Message: lib/rawdev: Defining dependency "rawdev" 00:02:25.595 Message: lib/regexdev: Defining dependency "regexdev" 00:02:25.595 Message: lib/dmadev: Defining dependency "dmadev" 00:02:25.595 Message: lib/rib: Defining 
dependency "rib" 00:02:25.595 Message: lib/reorder: Defining dependency "reorder" 00:02:25.595 Message: lib/sched: Defining dependency "sched" 00:02:25.595 Message: lib/security: Defining dependency "security" 00:02:25.595 Message: lib/stack: Defining dependency "stack" 00:02:25.595 Has header "linux/userfaultfd.h" : YES 00:02:25.595 Message: lib/vhost: Defining dependency "vhost" 00:02:25.595 Message: lib/ipsec: Defining dependency "ipsec" 00:02:25.595 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:25.595 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:25.595 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:25.595 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:25.595 Message: lib/fib: Defining dependency "fib" 00:02:25.595 Message: lib/port: Defining dependency "port" 00:02:25.595 Message: lib/pdump: Defining dependency "pdump" 00:02:25.595 Message: lib/table: Defining dependency "table" 00:02:25.595 Message: lib/pipeline: Defining dependency "pipeline" 00:02:25.595 Message: lib/graph: Defining dependency "graph" 00:02:25.595 Message: lib/node: Defining dependency "node" 00:02:25.595 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:25.595 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:25.595 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:25.595 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:25.595 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:25.595 Compiler for C supports arguments -Wno-unused-value: YES 00:02:25.595 Compiler for C supports arguments -Wno-format: YES 00:02:25.595 Compiler for C supports arguments -Wno-format-security: YES 00:02:25.595 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:27.497 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:27.497 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:27.497 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:27.497 Fetching value of define "__AVX2__" : 1 (cached) 00:02:27.497 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.497 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:27.497 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:27.497 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:27.497 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:27.497 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:27.497 Configuring doxy-api.conf using configuration 00:02:27.497 Program sphinx-build found: NO 00:02:27.497 Configuring rte_build_config.h using configuration 00:02:27.497 Message: 00:02:27.497 ================= 00:02:27.497 Applications Enabled 00:02:27.497 ================= 00:02:27.497 00:02:27.497 apps: 00:02:27.497 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:27.497 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:27.497 test-security-perf, 00:02:27.497 00:02:27.497 Message: 00:02:27.497 ================= 00:02:27.497 Libraries Enabled 00:02:27.497 ================= 00:02:27.497 00:02:27.497 libs: 00:02:27.497 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:27.498 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:27.498 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:27.498 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:27.498 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:27.498 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:27.498 table, pipeline, graph, node, 00:02:27.498 00:02:27.498 Message: 00:02:27.498 =============== 00:02:27.498 Drivers Enabled 00:02:27.498 =============== 00:02:27.498 00:02:27.498 common: 00:02:27.498 00:02:27.498 bus: 00:02:27.498 pci, vdev, 00:02:27.498 mempool: 00:02:27.498 ring, 00:02:27.498 dma: 00:02:27.498 00:02:27.498 net: 00:02:27.498 i40e, 00:02:27.498 raw: 00:02:27.498 00:02:27.498 crypto: 00:02:27.498 00:02:27.498 compress: 00:02:27.498 00:02:27.498 regex: 00:02:27.498 00:02:27.498 vdpa: 00:02:27.498 00:02:27.498 event: 00:02:27.498 00:02:27.498 baseband: 00:02:27.498 00:02:27.498 gpu: 00:02:27.498 00:02:27.498 00:02:27.498 Message: 00:02:27.498 ================= 00:02:27.498 Content Skipped 00:02:27.498 ================= 00:02:27.498 00:02:27.498 apps: 00:02:27.498 00:02:27.498 libs: 00:02:27.498 kni: explicitly disabled via build config (deprecated lib) 00:02:27.498 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:27.498 00:02:27.498 drivers: 00:02:27.498 common/cpt: not in enabled drivers build config 00:02:27.498 common/dpaax: not in enabled drivers build config 00:02:27.498 common/iavf: not in enabled drivers build config 00:02:27.498 common/idpf: not in enabled drivers build config 00:02:27.498 common/mvep: not in enabled drivers build config 00:02:27.498 common/octeontx: not in enabled drivers build config 00:02:27.498 bus/auxiliary: not in enabled drivers build config 00:02:27.498 bus/dpaa: not in enabled drivers build config 00:02:27.498 bus/fslmc: not in enabled drivers build config 00:02:27.498 bus/ifpga: not in enabled drivers build config 00:02:27.498 bus/vmbus: not in enabled drivers build config 00:02:27.498 common/cnxk: not in enabled drivers build config 00:02:27.498 common/mlx5: not in enabled drivers build config 00:02:27.498 common/qat: not in enabled drivers build config 00:02:27.498 common/sfc_efx: not in enabled drivers build config 00:02:27.498 mempool/bucket: not in enabled drivers build config 00:02:27.498 mempool/cnxk: not in enabled drivers build config 00:02:27.498 mempool/dpaa: not in enabled drivers build config 00:02:27.498 mempool/dpaa2: not in enabled drivers build config 00:02:27.498 mempool/octeontx: not in enabled drivers build config 00:02:27.498 mempool/stack: not in enabled drivers build config 00:02:27.498 dma/cnxk: not in enabled drivers build config 00:02:27.498 dma/dpaa: not in enabled drivers build config 00:02:27.498 dma/dpaa2: not in enabled drivers build config 00:02:27.498 dma/hisilicon: not in enabled drivers build config 00:02:27.498 dma/idxd: not in enabled drivers build config 00:02:27.498 dma/ioat: not in enabled drivers build config 00:02:27.498 dma/skeleton: not in enabled drivers build config 00:02:27.498 net/af_packet: not in enabled drivers build config 00:02:27.498 net/af_xdp: not in enabled drivers build config 00:02:27.498 net/ark: not in enabled drivers build config 00:02:27.498 net/atlantic: not in enabled drivers build config 00:02:27.498 net/avp: not in enabled drivers build config 00:02:27.498 net/axgbe: not in enabled drivers build config 00:02:27.498 net/bnx2x: not in enabled drivers build config 00:02:27.498 net/bnxt: not in enabled drivers build config 00:02:27.498 net/bonding: not in enabled drivers build config 00:02:27.498 net/cnxk: not in enabled drivers build config 00:02:27.498 net/cxgbe: not in 
enabled drivers build config 00:02:27.498 net/dpaa: not in enabled drivers build config 00:02:27.498 net/dpaa2: not in enabled drivers build config 00:02:27.498 net/e1000: not in enabled drivers build config 00:02:27.498 net/ena: not in enabled drivers build config 00:02:27.498 net/enetc: not in enabled drivers build config 00:02:27.498 net/enetfec: not in enabled drivers build config 00:02:27.498 net/enic: not in enabled drivers build config 00:02:27.498 net/failsafe: not in enabled drivers build config 00:02:27.498 net/fm10k: not in enabled drivers build config 00:02:27.498 net/gve: not in enabled drivers build config 00:02:27.498 net/hinic: not in enabled drivers build config 00:02:27.498 net/hns3: not in enabled drivers build config 00:02:27.498 net/iavf: not in enabled drivers build config 00:02:27.498 net/ice: not in enabled drivers build config 00:02:27.498 net/idpf: not in enabled drivers build config 00:02:27.498 net/igc: not in enabled drivers build config 00:02:27.498 net/ionic: not in enabled drivers build config 00:02:27.498 net/ipn3ke: not in enabled drivers build config 00:02:27.498 net/ixgbe: not in enabled drivers build config 00:02:27.498 net/kni: not in enabled drivers build config 00:02:27.498 net/liquidio: not in enabled drivers build config 00:02:27.498 net/mana: not in enabled drivers build config 00:02:27.498 net/memif: not in enabled drivers build config 00:02:27.498 net/mlx4: not in enabled drivers build config 00:02:27.498 net/mlx5: not in enabled drivers build config 00:02:27.498 net/mvneta: not in enabled drivers build config 00:02:27.498 net/mvpp2: not in enabled drivers build config 00:02:27.498 net/netvsc: not in enabled drivers build config 00:02:27.498 net/nfb: not in enabled drivers build config 00:02:27.498 net/nfp: not in enabled drivers build config 00:02:27.498 net/ngbe: not in enabled drivers build config 00:02:27.498 net/null: not in enabled drivers build config 00:02:27.498 net/octeontx: not in enabled drivers build config 00:02:27.498 net/octeon_ep: not in enabled drivers build config 00:02:27.498 net/pcap: not in enabled drivers build config 00:02:27.498 net/pfe: not in enabled drivers build config 00:02:27.498 net/qede: not in enabled drivers build config 00:02:27.498 net/ring: not in enabled drivers build config 00:02:27.498 net/sfc: not in enabled drivers build config 00:02:27.498 net/softnic: not in enabled drivers build config 00:02:27.498 net/tap: not in enabled drivers build config 00:02:27.498 net/thunderx: not in enabled drivers build config 00:02:27.498 net/txgbe: not in enabled drivers build config 00:02:27.498 net/vdev_netvsc: not in enabled drivers build config 00:02:27.498 net/vhost: not in enabled drivers build config 00:02:27.498 net/virtio: not in enabled drivers build config 00:02:27.498 net/vmxnet3: not in enabled drivers build config 00:02:27.498 raw/cnxk_bphy: not in enabled drivers build config 00:02:27.498 raw/cnxk_gpio: not in enabled drivers build config 00:02:27.498 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:27.498 raw/ifpga: not in enabled drivers build config 00:02:27.498 raw/ntb: not in enabled drivers build config 00:02:27.498 raw/skeleton: not in enabled drivers build config 00:02:27.498 crypto/armv8: not in enabled drivers build config 00:02:27.498 crypto/bcmfs: not in enabled drivers build config 00:02:27.498 crypto/caam_jr: not in enabled drivers build config 00:02:27.498 crypto/ccp: not in enabled drivers build config 00:02:27.498 crypto/cnxk: not in enabled drivers build config 00:02:27.498 
crypto/dpaa_sec: not in enabled drivers build config 00:02:27.498 crypto/dpaa2_sec: not in enabled drivers build config 00:02:27.498 crypto/ipsec_mb: not in enabled drivers build config 00:02:27.498 crypto/mlx5: not in enabled drivers build config 00:02:27.498 crypto/mvsam: not in enabled drivers build config 00:02:27.499 crypto/nitrox: not in enabled drivers build config 00:02:27.499 crypto/null: not in enabled drivers build config 00:02:27.499 crypto/octeontx: not in enabled drivers build config 00:02:27.499 crypto/openssl: not in enabled drivers build config 00:02:27.499 crypto/scheduler: not in enabled drivers build config 00:02:27.499 crypto/uadk: not in enabled drivers build config 00:02:27.499 crypto/virtio: not in enabled drivers build config 00:02:27.499 compress/isal: not in enabled drivers build config 00:02:27.499 compress/mlx5: not in enabled drivers build config 00:02:27.499 compress/octeontx: not in enabled drivers build config 00:02:27.499 compress/zlib: not in enabled drivers build config 00:02:27.499 regex/mlx5: not in enabled drivers build config 00:02:27.499 regex/cn9k: not in enabled drivers build config 00:02:27.499 vdpa/ifc: not in enabled drivers build config 00:02:27.499 vdpa/mlx5: not in enabled drivers build config 00:02:27.499 vdpa/sfc: not in enabled drivers build config 00:02:27.499 event/cnxk: not in enabled drivers build config 00:02:27.499 event/dlb2: not in enabled drivers build config 00:02:27.499 event/dpaa: not in enabled drivers build config 00:02:27.499 event/dpaa2: not in enabled drivers build config 00:02:27.499 event/dsw: not in enabled drivers build config 00:02:27.499 event/opdl: not in enabled drivers build config 00:02:27.499 event/skeleton: not in enabled drivers build config 00:02:27.499 event/sw: not in enabled drivers build config 00:02:27.499 event/octeontx: not in enabled drivers build config 00:02:27.499 baseband/acc: not in enabled drivers build config 00:02:27.499 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:27.499 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:27.499 baseband/la12xx: not in enabled drivers build config 00:02:27.499 baseband/null: not in enabled drivers build config 00:02:27.499 baseband/turbo_sw: not in enabled drivers build config 00:02:27.499 gpu/cuda: not in enabled drivers build config 00:02:27.499 00:02:27.499 00:02:27.499 Build targets in project: 314 00:02:27.499 00:02:27.499 DPDK 22.11.4 00:02:27.499 00:02:27.499 User defined options 00:02:27.499 libdir : lib 00:02:27.499 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:27.499 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:27.499 c_link_args : 00:02:27.499 enable_docs : false 00:02:27.499 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:27.499 enable_kmods : false 00:02:27.499 machine : native 00:02:27.499 tests : false 00:02:27.499 00:02:27.499 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:27.499 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
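For reference, the "User defined options" summary above corresponds roughly to a meson configure step followed by the ninja build shown next. The command below is a sketch reconstructed only from the options printed in this log (libdir, prefix, c_args, enable_docs, enable_drivers, enable_kmods, machine, tests); the exact wrapper invocation used by the autobuild script is not shown here, and the build directory name is assumed from the ninja line that follows.

    # sketch, assuming the options are passed straight through to meson setup
    meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
    ninja -C build-tmp -j10

The deprecation warning above is about invoking `meson [options]` without the explicit `setup` subcommand; the sketch uses the recommended `meson setup` form.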
00:02:27.499 20:43:38 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:27.499 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:27.499 [1/743] Generating lib/rte_kvargs_def with a custom command 00:02:27.499 [2/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:27.499 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:27.499 [4/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:27.499 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:27.499 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:27.499 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:27.758 [8/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:27.758 [9/743] Linking static target lib/librte_kvargs.a 00:02:27.758 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:27.758 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:27.758 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:27.758 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:27.758 [14/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:27.758 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:27.758 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:27.758 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:27.758 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:27.758 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:28.017 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.017 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:28.017 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:28.017 [23/743] Linking target lib/librte_kvargs.so.23.0 00:02:28.017 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:28.017 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:28.017 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:28.017 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:28.017 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:28.017 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:28.017 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.017 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:28.291 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:28.291 [33/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:28.291 [34/743] Linking static target lib/librte_telemetry.a 00:02:28.291 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:28.291 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:28.291 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:28.291 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:28.291 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:28.291 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:28.291 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:28.584 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:28.584 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:28.584 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:28.584 [45/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.584 [46/743] Linking target lib/librte_telemetry.so.23.0 00:02:28.584 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:28.584 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:28.584 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:28.584 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:28.584 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:28.584 [52/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:28.584 [53/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:28.842 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:28.842 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:28.842 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:28.842 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:28.842 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:28.842 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:28.842 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:28.842 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:28.842 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:28.842 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:28.842 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:28.842 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:28.842 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:28.842 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:28.842 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:29.102 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:29.102 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:29.102 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:29.102 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:29.102 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:29.102 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:29.102 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:29.102 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:29.102 [77/743] Generating lib/rte_eal_def with a custom command 00:02:29.102 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:29.102 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:29.102 [80/743] Generating lib/rte_ring_def with a custom command 00:02:29.102 [81/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:29.102 [82/743] Generating lib/rte_ring_mingw with a custom command 00:02:29.102 [83/743] Generating lib/rte_rcu_def with a custom command 00:02:29.102 [84/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:29.102 [85/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:29.102 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:02:29.361 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:29.361 [88/743] Linking static target lib/librte_ring.a 00:02:29.361 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:29.361 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:29.361 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:29.361 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:29.361 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:29.620 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.620 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:29.620 [96/743] Linking static target lib/librte_eal.a 00:02:29.879 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:29.879 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:29.879 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:29.879 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:29.879 [101/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:30.138 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:30.138 [103/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.138 [104/743] Linking static target lib/librte_rcu.a 00:02:30.138 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:30.138 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.138 [107/743] Linking static target lib/librte_mempool.a 00:02:30.396 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:30.396 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.396 [110/743] Generating lib/rte_net_def with a custom command 00:02:30.396 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:30.396 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:30.396 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:30.396 [114/743] Generating lib/rte_meter_def with a custom command 00:02:30.654 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:30.654 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:30.654 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.654 [118/743] Linking static target lib/librte_meter.a 00:02:30.654 [119/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:30.654 [120/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:30.654 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:30.916 [122/743] Generating lib/meter.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:30.916 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:30.916 [124/743] Linking static target lib/librte_mbuf.a 00:02:30.916 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.916 [126/743] Linking static target lib/librte_net.a 00:02:31.182 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.182 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.182 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:31.441 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:31.442 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:31.442 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:31.442 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.442 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:31.699 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:31.958 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:31.958 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:31.958 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:31.958 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:32.217 [140/743] Generating lib/rte_pci_def with a custom command 00:02:32.217 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:32.217 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:32.217 [143/743] Linking static target lib/librte_pci.a 00:02:32.217 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:32.217 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:32.217 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:32.217 [147/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:32.217 [148/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:32.475 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.475 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:32.475 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:32.475 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:32.475 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:32.475 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:32.475 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:32.475 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:32.475 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:32.475 [158/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:32.475 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:32.475 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:32.733 [161/743] Generating lib/rte_metrics_mingw with a custom command 00:02:32.733 [162/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:32.733 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:32.733 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:32.733 [165/743] Generating lib/rte_hash_def with a custom command 00:02:32.733 [166/743] Generating lib/rte_hash_mingw with a custom command 00:02:32.733 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:32.733 [168/743] Generating lib/rte_timer_def with a custom command 00:02:32.733 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.733 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:32.991 [171/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:32.991 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.991 [173/743] Linking static target lib/librte_cmdline.a 00:02:33.249 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:33.249 [175/743] Linking static target lib/librte_metrics.a 00:02:33.249 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:33.249 [177/743] Linking static target lib/librte_timer.a 00:02:33.508 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.508 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.508 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:33.508 [181/743] Linking static target lib/librte_ethdev.a 00:02:33.765 [182/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:33.765 [183/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:33.765 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.331 [185/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:34.331 [186/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:34.331 [187/743] Generating lib/rte_acl_def with a custom command 00:02:34.331 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:34.331 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:34.331 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:34.331 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:34.331 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:34.589 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:34.847 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:35.105 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:35.105 [196/743] Linking static target lib/librte_bitratestats.a 00:02:35.105 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:35.373 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.373 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:35.373 [200/743] Linking static target lib/librte_bbdev.a 00:02:35.373 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:35.660 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:35.660 [203/743] Linking static target lib/librte_hash.a 00:02:35.660 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:35.918 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:35.918 [206/743] Linking static target 
lib/acl/libavx512_tmp.a 00:02:35.918 [207/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:35.918 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:35.918 [209/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.483 [210/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:36.483 [211/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.483 [212/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:36.483 [213/743] Generating lib/rte_bpf_def with a custom command 00:02:36.483 [214/743] Generating lib/rte_bpf_mingw with a custom command 00:02:36.483 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:36.483 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:36.483 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:36.483 [218/743] Linking static target lib/librte_acl.a 00:02:36.483 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:36.741 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:36.741 [221/743] Linking static target lib/librte_cfgfile.a 00:02:36.741 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:36.741 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:36.741 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:36.741 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.741 [226/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.999 [227/743] Linking target lib/librte_eal.so.23.0 00:02:36.999 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.999 [229/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:36.999 [230/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:36.999 [231/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:36.999 [232/743] Generating lib/rte_cryptodev_def with a custom command 00:02:36.999 [233/743] Linking target lib/librte_ring.so.23.0 00:02:36.999 [234/743] Linking target lib/librte_meter.so.23.0 00:02:36.999 [235/743] Linking target lib/librte_pci.so.23.0 00:02:37.257 [236/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:37.257 [237/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:37.257 [238/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:37.257 [239/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:37.257 [240/743] Linking target lib/librte_timer.so.23.0 00:02:37.257 [241/743] Linking target lib/librte_rcu.so.23.0 00:02:37.257 [242/743] Linking target lib/librte_acl.so.23.0 00:02:37.257 [243/743] Linking target lib/librte_mempool.so.23.0 00:02:37.257 [244/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:37.257 [245/743] Linking static target lib/librte_bpf.a 00:02:37.257 [246/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:37.257 [247/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:37.257 [248/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:37.515 [249/743] Generating symbol file 
lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:37.515 [250/743] Linking static target lib/librte_compressdev.a 00:02:37.515 [251/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:37.515 [252/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:37.515 [253/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:37.515 [254/743] Linking target lib/librte_cfgfile.so.23.0 00:02:37.515 [255/743] Linking target lib/librte_mbuf.so.23.0 00:02:37.515 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:37.515 [257/743] Generating lib/rte_distributor_mingw with a custom command 00:02:37.515 [258/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:37.515 [259/743] Linking target lib/librte_net.so.23.0 00:02:37.515 [260/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:37.773 [261/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.773 [262/743] Linking target lib/librte_bbdev.so.23.0 00:02:37.773 [263/743] Generating lib/rte_efd_def with a custom command 00:02:37.773 [264/743] Generating lib/rte_efd_mingw with a custom command 00:02:37.773 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:37.773 [266/743] Linking target lib/librte_cmdline.so.23.0 00:02:37.773 [267/743] Linking target lib/librte_hash.so.23.0 00:02:38.032 [268/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:38.032 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:38.032 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:38.291 [271/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.291 [272/743] Linking target lib/librte_compressdev.so.23.0 00:02:38.291 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.291 [274/743] Linking target lib/librte_ethdev.so.23.0 00:02:38.291 [275/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:38.291 [276/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:38.291 [277/743] Linking static target lib/librte_distributor.a 00:02:38.549 [278/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:38.549 [279/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:38.549 [280/743] Linking target lib/librte_metrics.so.23.0 00:02:38.549 [281/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:38.549 [282/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.808 [283/743] Linking target lib/librte_bitratestats.so.23.0 00:02:38.808 [284/743] Linking target lib/librte_bpf.so.23.0 00:02:38.808 [285/743] Linking target lib/librte_distributor.so.23.0 00:02:38.808 [286/743] Generating lib/rte_eventdev_def with a custom command 00:02:38.808 [287/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:38.808 [288/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:38.808 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:38.808 [290/743] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:38.808 [291/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:39.066 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:39.066 [293/743] Linking static target lib/librte_efd.a 00:02:39.324 [294/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.324 [295/743] Linking target lib/librte_efd.so.23.0 00:02:39.324 [296/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:39.583 [297/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:39.583 [298/743] Linking static target lib/librte_cryptodev.a 00:02:39.583 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:39.583 [300/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:39.583 [301/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:39.583 [302/743] Generating lib/rte_gro_def with a custom command 00:02:39.841 [303/743] Generating lib/rte_gro_mingw with a custom command 00:02:39.841 [304/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:39.841 [305/743] Linking static target lib/librte_gpudev.a 00:02:39.841 [306/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:39.841 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:40.099 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:40.358 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:40.358 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:40.358 [311/743] Generating lib/rte_gso_def with a custom command 00:02:40.358 [312/743] Generating lib/rte_gso_mingw with a custom command 00:02:40.358 [313/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:40.358 [314/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:40.358 [315/743] Linking static target lib/librte_gro.a 00:02:40.616 [316/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:40.616 [317/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:40.616 [318/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.616 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.616 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:40.616 [321/743] Linking target lib/librte_gpudev.so.23.0 00:02:40.616 [322/743] Linking target lib/librte_gro.so.23.0 00:02:40.874 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:40.874 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:40.874 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:40.874 [326/743] Linking static target lib/librte_eventdev.a 00:02:40.874 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:40.874 [328/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:40.874 [329/743] Linking static target lib/librte_gso.a 00:02:40.874 [330/743] Linking static target lib/librte_jobstats.a 00:02:41.133 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:41.133 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:41.133 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:41.133 [334/743] Linking target lib/librte_gso.so.23.0 00:02:41.133 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:41.392 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:41.392 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:41.392 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:41.392 [339/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.392 [340/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:41.392 [341/743] Linking target lib/librte_jobstats.so.23.0 00:02:41.392 [342/743] Generating lib/rte_lpm_def with a custom command 00:02:41.392 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:02:41.392 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:41.392 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:41.650 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:41.650 [347/743] Linking static target lib/librte_ip_frag.a 00:02:41.650 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.909 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:41.909 [350/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.909 [351/743] Linking target lib/librte_ip_frag.so.23.0 00:02:41.909 [352/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:41.909 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:41.909 [354/743] Linking static target lib/librte_latencystats.a 00:02:41.909 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:42.168 [356/743] Generating lib/rte_member_def with a custom command 00:02:42.168 [357/743] Generating lib/rte_member_mingw with a custom command 00:02:42.168 [358/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:42.168 [359/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:42.168 [360/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:42.168 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:42.168 [362/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:42.168 [363/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:42.168 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.168 [365/743] Linking target lib/librte_latencystats.so.23.0 00:02:42.168 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:42.168 [367/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:42.426 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:42.426 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:42.686 [370/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:42.686 [371/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:42.686 [372/743] Linking static target lib/librte_lpm.a 00:02:42.686 [373/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:42.686 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 
00:02:42.686 [375/743] Generating lib/rte_power_def with a custom command 00:02:42.686 [376/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.944 [377/743] Generating lib/rte_power_mingw with a custom command 00:02:42.944 [378/743] Linking target lib/librte_eventdev.so.23.0 00:02:42.944 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:42.944 [380/743] Generating lib/rte_rawdev_def with a custom command 00:02:42.945 [381/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:42.945 [382/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:42.945 [383/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.945 [384/743] Generating lib/rte_regexdev_def with a custom command 00:02:42.945 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.945 [386/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:42.945 [387/743] Linking static target lib/librte_pcapng.a 00:02:42.945 [388/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:43.203 [389/743] Linking target lib/librte_lpm.so.23.0 00:02:43.203 [390/743] Generating lib/rte_dmadev_def with a custom command 00:02:43.203 [391/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:43.203 [392/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:43.203 [393/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:43.203 [394/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:43.203 [395/743] Linking static target lib/librte_rawdev.a 00:02:43.203 [396/743] Generating lib/rte_rib_def with a custom command 00:02:43.203 [397/743] Generating lib/rte_rib_mingw with a custom command 00:02:43.203 [398/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:43.203 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:43.462 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:43.462 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.462 [402/743] Linking target lib/librte_pcapng.so.23.0 00:02:43.462 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:43.462 [404/743] Linking static target lib/librte_dmadev.a 00:02:43.462 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:43.462 [406/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:43.462 [407/743] Linking static target lib/librte_power.a 00:02:43.720 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.720 [409/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:43.720 [410/743] Linking target lib/librte_rawdev.so.23.0 00:02:43.720 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:43.720 [412/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:43.720 [413/743] Linking static target lib/librte_regexdev.a 00:02:43.720 [414/743] Generating lib/rte_sched_def with a custom command 00:02:43.720 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:43.979 [416/743] Generating lib/rte_sched_mingw with a custom command 00:02:43.979 [417/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 
00:02:43.979 [418/743] Linking static target lib/librte_member.a 00:02:43.979 [419/743] Generating lib/rte_security_def with a custom command 00:02:43.979 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:43.979 [421/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.979 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:43.979 [423/743] Linking target lib/librte_dmadev.so.23.0 00:02:43.979 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:43.979 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:44.237 [426/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.237 [427/743] Generating lib/rte_stack_def with a custom command 00:02:44.237 [428/743] Linking static target lib/librte_reorder.a 00:02:44.237 [429/743] Generating lib/rte_stack_mingw with a custom command 00:02:44.237 [430/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:44.237 [431/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:44.237 [432/743] Linking static target lib/librte_stack.a 00:02:44.237 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.237 [434/743] Linking target lib/librte_member.so.23.0 00:02:44.237 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.507 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.507 [437/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.507 [438/743] Linking target lib/librte_reorder.so.23.0 00:02:44.507 [439/743] Linking target lib/librte_stack.so.23.0 00:02:44.508 [440/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:44.508 [441/743] Linking static target lib/librte_rib.a 00:02:44.508 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.508 [443/743] Linking target lib/librte_power.so.23.0 00:02:44.508 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.508 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:44.780 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.780 [447/743] Linking static target lib/librte_security.a 00:02:44.780 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.039 [449/743] Linking target lib/librte_rib.so.23.0 00:02:45.039 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.039 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:45.039 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:02:45.039 [453/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:45.039 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:45.039 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.039 [456/743] Linking target lib/librte_security.so.23.0 00:02:45.297 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:45.297 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:45.297 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:45.297 [460/743] Linking static target lib/librte_sched.a 
00:02:45.864 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.864 [462/743] Linking target lib/librte_sched.so.23.0 00:02:45.864 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:45.864 [464/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:45.864 [465/743] Generating lib/rte_ipsec_def with a custom command 00:02:45.864 [466/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:45.864 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:45.864 [468/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:46.122 [469/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:46.122 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:46.122 [471/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:46.381 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:46.639 [473/743] Generating lib/rte_fib_def with a custom command 00:02:46.639 [474/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:46.639 [475/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:46.639 [476/743] Generating lib/rte_fib_mingw with a custom command 00:02:46.639 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:46.639 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:46.897 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:46.897 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:46.897 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:46.897 [482/743] Linking static target lib/librte_ipsec.a 00:02:47.156 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.156 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:47.413 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:47.413 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:47.413 [487/743] Linking static target lib/librte_fib.a 00:02:47.413 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:47.413 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:47.413 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:47.672 [491/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.672 [492/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:47.672 [493/743] Linking target lib/librte_fib.so.23.0 00:02:47.930 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:48.497 [495/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:48.497 [496/743] Generating lib/rte_port_def with a custom command 00:02:48.497 [497/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:48.497 [498/743] Generating lib/rte_port_mingw with a custom command 00:02:48.497 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:48.497 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:02:48.497 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:48.497 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:48.755 [503/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:48.755 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:49.014 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:49.014 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:49.014 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:49.014 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:49.014 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:49.014 [510/743] Linking static target lib/librte_port.a 00:02:49.582 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:49.582 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:49.582 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.582 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:49.582 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:49.582 [516/743] Linking target lib/librte_port.so.23.0 00:02:49.841 [517/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:49.841 [518/743] Linking static target lib/librte_pdump.a 00:02:49.841 [519/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:49.841 [520/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:50.099 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.099 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:50.099 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:50.099 [524/743] Generating lib/rte_table_def with a custom command 00:02:50.099 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:50.358 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:50.617 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:50.617 [528/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.617 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:50.617 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:50.875 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:50.875 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:50.875 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:50.875 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:50.875 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:50.875 [536/743] Linking static target lib/librte_table.a 00:02:51.134 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:51.393 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:51.651 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:51.651 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.651 [541/743] Linking target lib/librte_table.so.23.0 00:02:51.651 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:51.651 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:51.651 [544/743] Generating lib/rte_graph_def with a custom command 00:02:51.651 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:02:51.651 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:51.911 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:51.911 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:52.170 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:52.428 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:52.428 [551/743] Linking static target lib/librte_graph.a 00:02:52.428 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:52.687 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:52.687 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:52.687 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:52.945 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:52.945 [557/743] Generating lib/rte_node_def with a custom command 00:02:52.945 [558/743] Generating lib/rte_node_mingw with a custom command 00:02:52.945 [559/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:53.203 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:53.203 [561/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:53.203 [562/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:53.203 [563/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.203 [564/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:53.203 [565/743] Linking target lib/librte_graph.so.23.0 00:02:53.462 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:53.462 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:53.462 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:53.462 [569/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:53.462 [570/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:53.462 [571/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:53.462 [572/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:53.462 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:53.462 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:53.462 [575/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:53.462 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:53.722 [577/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:53.722 [578/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:53.722 [579/743] Linking static target lib/librte_node.a 00:02:53.722 [580/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:53.722 [581/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:53.980 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.980 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:53.980 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.980 [585/743] Linking static target drivers/librte_bus_vdev.a 00:02:53.980 [586/743] Linking target lib/librte_node.so.23.0 00:02:53.980 [587/743] Compiling 
C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:53.980 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.239 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.239 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.239 [591/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:54.239 [592/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:54.239 [593/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:54.239 [594/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.239 [595/743] Linking static target drivers/librte_bus_pci.a 00:02:54.497 [596/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:54.497 [597/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.497 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:54.497 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:54.756 [600/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.756 [601/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:54.756 [602/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:54.756 [603/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:54.756 [604/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:55.014 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.014 [606/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:55.014 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.014 [608/743] Linking static target drivers/librte_mempool_ring.a 00:02:55.014 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.014 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:55.581 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:55.840 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:55.840 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:55.840 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:56.406 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:56.406 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:56.406 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:56.664 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:56.923 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:57.181 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:57.181 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:57.181 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:57.181 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:57.439 [624/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:57.440 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:58.489 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:58.752 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:58.752 [628/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:58.752 [629/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:58.752 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:58.752 [631/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:58.752 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:59.011 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:59.011 [634/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:59.270 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:59.270 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:59.528 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:59.787 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:59.787 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:59.787 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:00.046 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:00.046 [642/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.046 [643/743] Linking static target lib/librte_vhost.a 00:03:00.046 [644/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:00.046 [645/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:00.304 [646/743] Linking static target drivers/librte_net_i40e.a 00:03:00.304 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:00.304 [648/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:00.304 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:00.563 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:00.563 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:00.821 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:00.821 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.821 [654/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:00.821 [655/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:01.079 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:01.337 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:01.337 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.337 [659/743] Linking target lib/librte_vhost.so.23.0 00:03:01.596 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:01.596 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:01.596 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:01.854 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:01.854 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:01.854 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:01.854 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:02.113 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:02.113 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:02.113 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:02.371 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:02.638 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:02.638 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:02.638 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:03.205 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:03.463 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:03.463 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:03.463 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:03.722 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:03.981 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:03.981 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:03.981 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:03.981 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:04.239 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:04.239 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:04.239 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:04.498 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:04.498 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:04.756 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:04.756 [689/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:04.756 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:04.756 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:05.015 [692/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:05.015 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:05.015 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:05.582 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:05.582 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:05.582 [697/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:05.841 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:05.841 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:06.099 [700/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:06.099 [701/743] Linking static target lib/librte_pipeline.a 00:03:06.358 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:06.358 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:06.358 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:06.617 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:06.617 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:06.875 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:06.875 [708/743] Linking target app/dpdk-dumpcap 00:03:06.875 [709/743] Linking target app/dpdk-pdump 00:03:06.875 [710/743] Linking target app/dpdk-proc-info 00:03:07.133 [711/743] Linking target app/dpdk-test-bbdev 00:03:07.133 [712/743] Linking target app/dpdk-test-acl 00:03:07.133 [713/743] Linking target app/dpdk-test-cmdline 00:03:07.133 [714/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:07.133 [715/743] Linking target app/dpdk-test-compress-perf 00:03:07.392 [716/743] Linking target app/dpdk-test-crypto-perf 00:03:07.392 [717/743] Linking target app/dpdk-test-eventdev 00:03:07.392 [718/743] Linking target app/dpdk-test-flow-perf 00:03:07.392 [719/743] Linking target app/dpdk-test-fib 00:03:07.651 [720/743] Linking target app/dpdk-test-gpudev 00:03:07.651 [721/743] Linking target app/dpdk-test-pipeline 00:03:07.909 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:08.168 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:08.426 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:08.426 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:08.426 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:08.426 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:08.685 [728/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.685 [729/743] Linking target lib/librte_pipeline.so.23.0 00:03:08.943 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:08.943 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:09.202 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:09.202 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:09.202 [734/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:09.202 [735/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:09.460 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:09.719 [737/743] Linking target app/dpdk-test-sad 00:03:09.719 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:09.719 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:09.719 [740/743] Linking target app/dpdk-test-regex 00:03:09.977 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:10.236 [742/743] Linking target app/dpdk-testpmd 00:03:10.494 [743/743] Linking target 
app/dpdk-test-security-perf 00:03:10.494 20:44:21 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:10.494 20:44:21 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:10.494 20:44:21 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:10.494 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:10.494 [0/1] Installing files. 00:03:10.755 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:10.755 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:10.755 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 
00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:10.756 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.756 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.758 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:10.759 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:10.759 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.019 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.019 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.019 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.019 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_hash.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.019 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing 
lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.020 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:11.020 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.020 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.020 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.020 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:11.020 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.020 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.020 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.020 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.020 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.020 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.020 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.020 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.020 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.281 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.281 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.281 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.281 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.281 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.281 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.281 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.281 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.283 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.284 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.284 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:11.284 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:11.284 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:11.284 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:11.284 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:11.284 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:11.284 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:11.284 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:11.284 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:11.284 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:11.284 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:11.284 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:11.284 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:11.284 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:11.284 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:11.284 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:11.284 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:11.284 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:11.284 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:11.284 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:11.284 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:11.284 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:11.284 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:11.284 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:11.284 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:11.284 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:11.284 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:11.284 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:11.284 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:11.284 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:11.284 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:11.284 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:11.284 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:11.284 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:11.284 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:11.284 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:11.284 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:11.284 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:11.284 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:11.284 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:11.284 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:11.284 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:11.284 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:11.284 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:11.284 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:11.284 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:11.284 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:11.284 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:11.284 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:11.284 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:11.284 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:11.284 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:11.284 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:11.285 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:11.285 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:11.285 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:11.285 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:11.285 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:11.285 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:11.285 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:11.285 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:11.285 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:11.285 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:11.285 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:11.285 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:11.285 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:11.285 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:11.285 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:11.285 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:11.285 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:11.285 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:11.285 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:11.285 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:11.285 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:11.285 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:11.285 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:11.285 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:11.285 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:11.285 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:11.285 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:11.285 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:11.285 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:11.285 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:11.285 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:11.285 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:11.285 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:11.285 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:11.285 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:11.285 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:11.285 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:11.285 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:11.285 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:11.285 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:11.285 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:11.285 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:11.285 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:11.285 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:11.285 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:11.285 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:11.285 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:11.285 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:11.285 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:11.285 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:11.285 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:11.285 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
00:03:11.285 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:11.285 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:11.285 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:11.285 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:11.285 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:11.285 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:11.285 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:11.285 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:11.285 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:11.285 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:11.285 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:11.285 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:11.285 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:11.285 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:11.285 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:11.285 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:11.285 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:11.285 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:11.285 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:11.285 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:11.285 20:44:22 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:03:11.285 20:44:22 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:11.285 00:03:11.285 real 0m51.166s 00:03:11.285 user 5m57.457s 00:03:11.285 sys 0m59.375s 00:03:11.285 20:44:22 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:11.285 20:44:22 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:11.285 ************************************ 00:03:11.285 END TEST build_native_dpdk 00:03:11.285 ************************************ 00:03:11.285 20:44:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:11.285 20:44:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:11.285 20:44:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:11.285 20:44:22 -- spdk/autobuild.sh@55 -- $ [[ -n 
'' ]] 00:03:11.285 20:44:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:11.285 20:44:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:11.285 20:44:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:11.285 20:44:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:11.544 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:11.544 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.544 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:11.544 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:12.111 Using 'verbs' RDMA provider 00:03:25.302 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:40.186 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:40.186 Creating mk/config.mk...done. 00:03:40.186 Creating mk/cc.flags.mk...done. 00:03:40.186 Type 'make' to build. 00:03:40.186 20:44:48 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:40.186 20:44:48 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:40.186 20:44:48 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:40.186 20:44:48 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.186 ************************************ 00:03:40.186 START TEST make 00:03:40.186 ************************************ 00:03:40.186 20:44:48 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:40.186 make[1]: Nothing to be done for 'all'. 00:04:02.112 CC lib/ut/ut.o 00:04:02.112 CC lib/ut_mock/mock.o 00:04:02.112 CC lib/log/log.o 00:04:02.112 CC lib/log/log_flags.o 00:04:02.112 CC lib/log/log_deprecated.o 00:04:02.112 LIB libspdk_log.a 00:04:02.112 LIB libspdk_ut.a 00:04:02.112 LIB libspdk_ut_mock.a 00:04:02.112 SO libspdk_log.so.7.0 00:04:02.112 SO libspdk_ut_mock.so.6.0 00:04:02.112 SO libspdk_ut.so.2.0 00:04:02.112 SYMLINK libspdk_ut_mock.so 00:04:02.112 SYMLINK libspdk_ut.so 00:04:02.112 SYMLINK libspdk_log.so 00:04:02.112 CXX lib/trace_parser/trace.o 00:04:02.112 CC lib/ioat/ioat.o 00:04:02.112 CC lib/dma/dma.o 00:04:02.112 CC lib/util/base64.o 00:04:02.112 CC lib/util/bit_array.o 00:04:02.112 CC lib/util/crc16.o 00:04:02.112 CC lib/util/cpuset.o 00:04:02.112 CC lib/util/crc32.o 00:04:02.112 CC lib/util/crc32c.o 00:04:02.112 CC lib/vfio_user/host/vfio_user_pci.o 00:04:02.112 CC lib/util/crc32_ieee.o 00:04:02.112 CC lib/util/crc64.o 00:04:02.112 CC lib/util/dif.o 00:04:02.112 LIB libspdk_dma.a 00:04:02.112 CC lib/util/fd.o 00:04:02.112 CC lib/util/fd_group.o 00:04:02.112 SO libspdk_dma.so.4.0 00:04:02.112 CC lib/util/file.o 00:04:02.112 CC lib/util/hexlify.o 00:04:02.112 CC lib/util/iov.o 00:04:02.112 SYMLINK libspdk_dma.so 00:04:02.112 CC lib/util/math.o 00:04:02.112 CC lib/vfio_user/host/vfio_user.o 00:04:02.112 CC lib/util/net.o 00:04:02.112 LIB libspdk_ioat.a 00:04:02.112 CC lib/util/pipe.o 00:04:02.112 SO libspdk_ioat.so.7.0 00:04:02.112 CC lib/util/strerror_tls.o 00:04:02.112 SYMLINK libspdk_ioat.so 00:04:02.112 CC lib/util/string.o 00:04:02.112 CC lib/util/uuid.o 00:04:02.112 CC lib/util/xor.o 00:04:02.112 CC lib/util/zipf.o 00:04:02.112 LIB libspdk_vfio_user.a 00:04:02.112 SO libspdk_vfio_user.so.5.0 00:04:02.112 SYMLINK libspdk_vfio_user.so 00:04:02.112 LIB libspdk_util.a 
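The configure invocation above builds SPDK against the DPDK tree that the preceding install step staged into /home/vagrant/spdk_repo/dpdk/build (note the --with-dpdk and --with-shared flags, and the pkgconfig path picked up from that build). A minimal sketch of the same external-DPDK pattern, assuming a similar checkout layout; the meson/ninja staging step is an assumption not shown in this section, while the configure flags and the -j10 value do appear in this run:

# Sketch only: stage DPDK into a local prefix, then point SPDK's configure at it.
cd /home/vagrant/spdk_repo/dpdk
meson setup build-tmp --prefix="$PWD/build"   # assumed command; the staged prefix matches dpdk/build above
ninja -C build-tmp install
cd /home/vagrant/spdk_repo/spdk
./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared --enable-debug
make -j10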
00:04:02.112 SO libspdk_util.so.10.0 00:04:02.112 LIB libspdk_trace_parser.a 00:04:02.112 SYMLINK libspdk_util.so 00:04:02.370 SO libspdk_trace_parser.so.5.0 00:04:02.370 SYMLINK libspdk_trace_parser.so 00:04:02.370 CC lib/rdma_utils/rdma_utils.o 00:04:02.370 CC lib/rdma_provider/common.o 00:04:02.370 CC lib/idxd/idxd.o 00:04:02.370 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:02.370 CC lib/idxd/idxd_user.o 00:04:02.370 CC lib/idxd/idxd_kernel.o 00:04:02.370 CC lib/json/json_parse.o 00:04:02.370 CC lib/vmd/vmd.o 00:04:02.370 CC lib/env_dpdk/env.o 00:04:02.370 CC lib/conf/conf.o 00:04:02.631 CC lib/env_dpdk/memory.o 00:04:02.631 LIB libspdk_rdma_provider.a 00:04:02.631 CC lib/env_dpdk/pci.o 00:04:02.631 SO libspdk_rdma_provider.so.6.0 00:04:02.631 LIB libspdk_conf.a 00:04:02.631 CC lib/json/json_util.o 00:04:02.631 CC lib/env_dpdk/init.o 00:04:02.631 LIB libspdk_rdma_utils.a 00:04:02.631 SO libspdk_conf.so.6.0 00:04:02.631 SO libspdk_rdma_utils.so.1.0 00:04:02.631 SYMLINK libspdk_rdma_provider.so 00:04:02.631 CC lib/env_dpdk/threads.o 00:04:02.631 SYMLINK libspdk_conf.so 00:04:02.631 CC lib/env_dpdk/pci_ioat.o 00:04:02.631 SYMLINK libspdk_rdma_utils.so 00:04:02.631 CC lib/env_dpdk/pci_virtio.o 00:04:02.890 CC lib/env_dpdk/pci_vmd.o 00:04:02.890 CC lib/env_dpdk/pci_idxd.o 00:04:02.890 CC lib/env_dpdk/pci_event.o 00:04:02.890 CC lib/env_dpdk/sigbus_handler.o 00:04:02.890 CC lib/json/json_write.o 00:04:02.890 LIB libspdk_idxd.a 00:04:02.890 CC lib/env_dpdk/pci_dpdk.o 00:04:02.890 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:02.890 SO libspdk_idxd.so.12.0 00:04:02.890 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:03.149 CC lib/vmd/led.o 00:04:03.149 SYMLINK libspdk_idxd.so 00:04:03.149 LIB libspdk_json.a 00:04:03.149 LIB libspdk_vmd.a 00:04:03.149 SO libspdk_json.so.6.0 00:04:03.149 SO libspdk_vmd.so.6.0 00:04:03.407 SYMLINK libspdk_json.so 00:04:03.407 SYMLINK libspdk_vmd.so 00:04:03.666 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:03.666 CC lib/jsonrpc/jsonrpc_server.o 00:04:03.666 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:03.666 CC lib/jsonrpc/jsonrpc_client.o 00:04:03.924 LIB libspdk_jsonrpc.a 00:04:03.924 LIB libspdk_env_dpdk.a 00:04:03.924 SO libspdk_jsonrpc.so.6.0 00:04:03.924 SO libspdk_env_dpdk.so.15.0 00:04:03.924 SYMLINK libspdk_jsonrpc.so 00:04:04.181 SYMLINK libspdk_env_dpdk.so 00:04:04.181 CC lib/rpc/rpc.o 00:04:04.440 LIB libspdk_rpc.a 00:04:04.440 SO libspdk_rpc.so.6.0 00:04:04.440 SYMLINK libspdk_rpc.so 00:04:04.698 CC lib/keyring/keyring.o 00:04:04.698 CC lib/keyring/keyring_rpc.o 00:04:04.698 CC lib/trace/trace.o 00:04:04.698 CC lib/trace/trace_flags.o 00:04:04.698 CC lib/trace/trace_rpc.o 00:04:04.698 CC lib/notify/notify.o 00:04:04.698 CC lib/notify/notify_rpc.o 00:04:04.956 LIB libspdk_notify.a 00:04:04.956 SO libspdk_notify.so.6.0 00:04:04.956 LIB libspdk_keyring.a 00:04:04.956 SO libspdk_keyring.so.1.0 00:04:05.215 SYMLINK libspdk_notify.so 00:04:05.215 LIB libspdk_trace.a 00:04:05.215 SO libspdk_trace.so.10.0 00:04:05.215 SYMLINK libspdk_keyring.so 00:04:05.215 SYMLINK libspdk_trace.so 00:04:05.473 CC lib/thread/thread.o 00:04:05.473 CC lib/thread/iobuf.o 00:04:05.473 CC lib/sock/sock.o 00:04:05.473 CC lib/sock/sock_rpc.o 00:04:06.039 LIB libspdk_sock.a 00:04:06.039 SO libspdk_sock.so.10.0 00:04:06.039 SYMLINK libspdk_sock.so 00:04:06.297 CC lib/nvme/nvme_ctrlr.o 00:04:06.297 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:06.297 CC lib/nvme/nvme_ns_cmd.o 00:04:06.297 CC lib/nvme/nvme_fabric.o 00:04:06.297 CC lib/nvme/nvme_ns.o 00:04:06.297 CC lib/nvme/nvme_pcie_common.o 00:04:06.297 CC 
lib/nvme/nvme.o 00:04:06.297 CC lib/nvme/nvme_qpair.o 00:04:06.297 CC lib/nvme/nvme_pcie.o 00:04:07.284 CC lib/nvme/nvme_quirks.o 00:04:07.284 CC lib/nvme/nvme_transport.o 00:04:07.284 CC lib/nvme/nvme_discovery.o 00:04:07.284 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:07.284 LIB libspdk_thread.a 00:04:07.284 SO libspdk_thread.so.10.1 00:04:07.284 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:07.284 CC lib/nvme/nvme_tcp.o 00:04:07.284 SYMLINK libspdk_thread.so 00:04:07.284 CC lib/nvme/nvme_opal.o 00:04:07.284 CC lib/nvme/nvme_io_msg.o 00:04:07.284 CC lib/nvme/nvme_poll_group.o 00:04:07.543 CC lib/nvme/nvme_zns.o 00:04:07.802 CC lib/nvme/nvme_stubs.o 00:04:07.802 CC lib/nvme/nvme_auth.o 00:04:07.802 CC lib/nvme/nvme_cuse.o 00:04:07.802 CC lib/nvme/nvme_rdma.o 00:04:08.061 CC lib/accel/accel.o 00:04:08.061 CC lib/accel/accel_rpc.o 00:04:08.061 CC lib/blob/blobstore.o 00:04:08.320 CC lib/accel/accel_sw.o 00:04:08.320 CC lib/init/json_config.o 00:04:08.579 CC lib/virtio/virtio.o 00:04:08.579 CC lib/init/subsystem.o 00:04:08.579 CC lib/init/subsystem_rpc.o 00:04:08.579 CC lib/init/rpc.o 00:04:08.579 CC lib/blob/request.o 00:04:08.579 CC lib/virtio/virtio_vhost_user.o 00:04:08.837 CC lib/virtio/virtio_vfio_user.o 00:04:08.837 CC lib/virtio/virtio_pci.o 00:04:08.837 CC lib/blob/zeroes.o 00:04:08.837 CC lib/blob/blob_bs_dev.o 00:04:08.837 LIB libspdk_init.a 00:04:08.837 LIB libspdk_accel.a 00:04:08.837 SO libspdk_init.so.5.0 00:04:08.837 SO libspdk_accel.so.16.0 00:04:08.837 SYMLINK libspdk_init.so 00:04:09.096 SYMLINK libspdk_accel.so 00:04:09.096 LIB libspdk_virtio.a 00:04:09.096 CC lib/event/app.o 00:04:09.096 CC lib/event/app_rpc.o 00:04:09.096 CC lib/event/reactor.o 00:04:09.096 CC lib/event/log_rpc.o 00:04:09.096 CC lib/event/scheduler_static.o 00:04:09.096 SO libspdk_virtio.so.7.0 00:04:09.096 CC lib/bdev/bdev.o 00:04:09.096 CC lib/bdev/bdev_rpc.o 00:04:09.355 SYMLINK libspdk_virtio.so 00:04:09.355 CC lib/bdev/bdev_zone.o 00:04:09.355 LIB libspdk_nvme.a 00:04:09.355 CC lib/bdev/part.o 00:04:09.355 CC lib/bdev/scsi_nvme.o 00:04:09.613 SO libspdk_nvme.so.13.1 00:04:09.613 LIB libspdk_event.a 00:04:09.613 SO libspdk_event.so.14.0 00:04:09.613 SYMLINK libspdk_event.so 00:04:09.871 SYMLINK libspdk_nvme.so 00:04:11.246 LIB libspdk_blob.a 00:04:11.246 SO libspdk_blob.so.11.0 00:04:11.246 SYMLINK libspdk_blob.so 00:04:11.504 CC lib/lvol/lvol.o 00:04:11.504 CC lib/blobfs/blobfs.o 00:04:11.504 CC lib/blobfs/tree.o 00:04:12.070 LIB libspdk_bdev.a 00:04:12.070 SO libspdk_bdev.so.16.0 00:04:12.070 SYMLINK libspdk_bdev.so 00:04:12.328 CC lib/scsi/dev.o 00:04:12.329 CC lib/scsi/lun.o 00:04:12.329 CC lib/scsi/scsi.o 00:04:12.329 CC lib/scsi/port.o 00:04:12.329 CC lib/ftl/ftl_core.o 00:04:12.329 CC lib/ublk/ublk.o 00:04:12.329 CC lib/nvmf/ctrlr.o 00:04:12.329 CC lib/nbd/nbd.o 00:04:12.329 LIB libspdk_blobfs.a 00:04:12.329 SO libspdk_blobfs.so.10.0 00:04:12.329 CC lib/scsi/scsi_bdev.o 00:04:12.329 CC lib/ublk/ublk_rpc.o 00:04:12.329 LIB libspdk_lvol.a 00:04:12.329 SYMLINK libspdk_blobfs.so 00:04:12.587 CC lib/scsi/scsi_pr.o 00:04:12.587 SO libspdk_lvol.so.10.0 00:04:12.587 CC lib/scsi/scsi_rpc.o 00:04:12.587 SYMLINK libspdk_lvol.so 00:04:12.587 CC lib/scsi/task.o 00:04:12.587 CC lib/ftl/ftl_init.o 00:04:12.587 CC lib/ftl/ftl_layout.o 00:04:12.587 CC lib/ftl/ftl_debug.o 00:04:12.845 CC lib/ftl/ftl_io.o 00:04:12.845 CC lib/nbd/nbd_rpc.o 00:04:12.845 CC lib/nvmf/ctrlr_discovery.o 00:04:12.845 CC lib/nvmf/ctrlr_bdev.o 00:04:12.845 CC lib/ftl/ftl_sb.o 00:04:12.845 CC lib/ftl/ftl_l2p.o 00:04:12.845 LIB libspdk_scsi.a 
00:04:12.845 LIB libspdk_nbd.a 00:04:12.845 LIB libspdk_ublk.a 00:04:12.845 SO libspdk_nbd.so.7.0 00:04:12.845 SO libspdk_scsi.so.9.0 00:04:13.103 CC lib/ftl/ftl_l2p_flat.o 00:04:13.103 CC lib/nvmf/subsystem.o 00:04:13.103 SO libspdk_ublk.so.3.0 00:04:13.103 SYMLINK libspdk_nbd.so 00:04:13.103 CC lib/nvmf/nvmf.o 00:04:13.103 CC lib/nvmf/nvmf_rpc.o 00:04:13.103 SYMLINK libspdk_scsi.so 00:04:13.103 CC lib/nvmf/transport.o 00:04:13.103 SYMLINK libspdk_ublk.so 00:04:13.103 CC lib/ftl/ftl_nv_cache.o 00:04:13.103 CC lib/ftl/ftl_band.o 00:04:13.361 CC lib/iscsi/conn.o 00:04:13.361 CC lib/iscsi/init_grp.o 00:04:13.361 CC lib/nvmf/tcp.o 00:04:13.619 CC lib/iscsi/iscsi.o 00:04:13.619 CC lib/iscsi/md5.o 00:04:13.877 CC lib/nvmf/stubs.o 00:04:13.877 CC lib/ftl/ftl_band_ops.o 00:04:13.877 CC lib/ftl/ftl_writer.o 00:04:13.877 CC lib/ftl/ftl_rq.o 00:04:13.877 CC lib/nvmf/mdns_server.o 00:04:14.134 CC lib/ftl/ftl_reloc.o 00:04:14.134 CC lib/nvmf/rdma.o 00:04:14.134 CC lib/nvmf/auth.o 00:04:14.134 CC lib/ftl/ftl_l2p_cache.o 00:04:14.134 CC lib/vhost/vhost.o 00:04:14.134 CC lib/vhost/vhost_rpc.o 00:04:14.134 CC lib/vhost/vhost_scsi.o 00:04:14.392 CC lib/vhost/vhost_blk.o 00:04:14.392 CC lib/vhost/rte_vhost_user.o 00:04:14.650 CC lib/ftl/ftl_p2l.o 00:04:14.908 CC lib/ftl/mngt/ftl_mngt.o 00:04:14.908 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:14.908 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:14.908 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:15.166 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:15.166 CC lib/iscsi/param.o 00:04:15.166 CC lib/iscsi/portal_grp.o 00:04:15.166 CC lib/iscsi/tgt_node.o 00:04:15.166 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:15.166 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:15.166 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:15.424 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:15.424 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:15.424 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:15.424 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:15.424 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:15.424 CC lib/ftl/utils/ftl_conf.o 00:04:15.682 CC lib/ftl/utils/ftl_md.o 00:04:15.682 CC lib/iscsi/iscsi_subsystem.o 00:04:15.682 CC lib/ftl/utils/ftl_mempool.o 00:04:15.682 LIB libspdk_vhost.a 00:04:15.682 CC lib/ftl/utils/ftl_bitmap.o 00:04:15.682 CC lib/ftl/utils/ftl_property.o 00:04:15.682 CC lib/iscsi/iscsi_rpc.o 00:04:15.682 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:15.682 SO libspdk_vhost.so.8.0 00:04:15.940 CC lib/iscsi/task.o 00:04:15.940 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:15.940 SYMLINK libspdk_vhost.so 00:04:15.940 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:15.940 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:15.940 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:15.940 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:15.940 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:16.198 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:16.198 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:16.198 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:16.198 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:16.198 CC lib/ftl/base/ftl_base_dev.o 00:04:16.198 CC lib/ftl/base/ftl_base_bdev.o 00:04:16.198 CC lib/ftl/ftl_trace.o 00:04:16.198 LIB libspdk_iscsi.a 00:04:16.198 LIB libspdk_nvmf.a 00:04:16.198 SO libspdk_iscsi.so.8.0 00:04:16.198 SO libspdk_nvmf.so.19.0 00:04:16.456 LIB libspdk_ftl.a 00:04:16.456 SYMLINK libspdk_iscsi.so 00:04:16.456 SYMLINK libspdk_nvmf.so 00:04:16.717 SO libspdk_ftl.so.9.0 00:04:16.975 SYMLINK libspdk_ftl.so 00:04:17.233 CC module/env_dpdk/env_dpdk_rpc.o 00:04:17.492 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:17.492 CC module/accel/error/accel_error.o 00:04:17.492 CC 
module/sock/uring/uring.o 00:04:17.492 CC module/scheduler/gscheduler/gscheduler.o 00:04:17.492 CC module/blob/bdev/blob_bdev.o 00:04:17.492 CC module/sock/posix/posix.o 00:04:17.492 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:17.492 CC module/keyring/file/keyring.o 00:04:17.492 CC module/keyring/linux/keyring.o 00:04:17.492 LIB libspdk_env_dpdk_rpc.a 00:04:17.492 SO libspdk_env_dpdk_rpc.so.6.0 00:04:17.492 SYMLINK libspdk_env_dpdk_rpc.so 00:04:17.492 CC module/keyring/file/keyring_rpc.o 00:04:17.492 LIB libspdk_scheduler_gscheduler.a 00:04:17.492 CC module/keyring/linux/keyring_rpc.o 00:04:17.492 SO libspdk_scheduler_gscheduler.so.4.0 00:04:17.492 CC module/accel/error/accel_error_rpc.o 00:04:17.492 LIB libspdk_scheduler_dynamic.a 00:04:17.750 LIB libspdk_scheduler_dpdk_governor.a 00:04:17.750 SO libspdk_scheduler_dynamic.so.4.0 00:04:17.750 SYMLINK libspdk_scheduler_gscheduler.so 00:04:17.750 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:17.750 LIB libspdk_keyring_file.a 00:04:17.750 SYMLINK libspdk_scheduler_dynamic.so 00:04:17.750 LIB libspdk_blob_bdev.a 00:04:17.750 LIB libspdk_keyring_linux.a 00:04:17.750 SO libspdk_keyring_file.so.1.0 00:04:17.750 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:17.750 LIB libspdk_accel_error.a 00:04:17.750 SO libspdk_keyring_linux.so.1.0 00:04:17.750 SO libspdk_blob_bdev.so.11.0 00:04:17.750 CC module/accel/ioat/accel_ioat.o 00:04:17.750 SO libspdk_accel_error.so.2.0 00:04:17.750 SYMLINK libspdk_keyring_file.so 00:04:17.750 CC module/accel/ioat/accel_ioat_rpc.o 00:04:17.750 SYMLINK libspdk_blob_bdev.so 00:04:17.750 SYMLINK libspdk_keyring_linux.so 00:04:17.750 SYMLINK libspdk_accel_error.so 00:04:17.750 CC module/accel/dsa/accel_dsa.o 00:04:17.750 CC module/accel/dsa/accel_dsa_rpc.o 00:04:18.009 CC module/accel/iaa/accel_iaa.o 00:04:18.009 CC module/accel/iaa/accel_iaa_rpc.o 00:04:18.009 LIB libspdk_accel_ioat.a 00:04:18.009 SO libspdk_accel_ioat.so.6.0 00:04:18.009 LIB libspdk_sock_uring.a 00:04:18.009 SYMLINK libspdk_accel_ioat.so 00:04:18.009 CC module/blobfs/bdev/blobfs_bdev.o 00:04:18.009 CC module/bdev/delay/vbdev_delay.o 00:04:18.009 LIB libspdk_accel_iaa.a 00:04:18.269 LIB libspdk_accel_dsa.a 00:04:18.269 CC module/bdev/error/vbdev_error.o 00:04:18.269 SO libspdk_sock_uring.so.5.0 00:04:18.269 SO libspdk_accel_iaa.so.3.0 00:04:18.269 LIB libspdk_sock_posix.a 00:04:18.269 SO libspdk_accel_dsa.so.5.0 00:04:18.269 SO libspdk_sock_posix.so.6.0 00:04:18.269 CC module/bdev/gpt/gpt.o 00:04:18.269 SYMLINK libspdk_sock_uring.so 00:04:18.269 CC module/bdev/gpt/vbdev_gpt.o 00:04:18.269 SYMLINK libspdk_accel_iaa.so 00:04:18.269 CC module/bdev/error/vbdev_error_rpc.o 00:04:18.269 SYMLINK libspdk_accel_dsa.so 00:04:18.269 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:18.269 CC module/bdev/lvol/vbdev_lvol.o 00:04:18.269 CC module/bdev/malloc/bdev_malloc.o 00:04:18.269 SYMLINK libspdk_sock_posix.so 00:04:18.269 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:18.269 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:18.528 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:18.528 LIB libspdk_bdev_error.a 00:04:18.528 SO libspdk_bdev_error.so.6.0 00:04:18.528 LIB libspdk_blobfs_bdev.a 00:04:18.528 SYMLINK libspdk_bdev_error.so 00:04:18.528 LIB libspdk_bdev_gpt.a 00:04:18.528 LIB libspdk_bdev_delay.a 00:04:18.528 SO libspdk_blobfs_bdev.so.6.0 00:04:18.528 SO libspdk_bdev_gpt.so.6.0 00:04:18.528 SO libspdk_bdev_delay.so.6.0 00:04:18.528 CC module/bdev/null/bdev_null.o 00:04:18.528 SYMLINK libspdk_bdev_gpt.so 00:04:18.528 SYMLINK libspdk_blobfs_bdev.so 
00:04:18.528 SYMLINK libspdk_bdev_delay.so 00:04:18.528 CC module/bdev/nvme/bdev_nvme.o 00:04:18.528 CC module/bdev/null/bdev_null_rpc.o 00:04:18.786 LIB libspdk_bdev_malloc.a 00:04:18.786 CC module/bdev/passthru/vbdev_passthru.o 00:04:18.786 CC module/bdev/raid/bdev_raid.o 00:04:18.786 SO libspdk_bdev_malloc.so.6.0 00:04:18.786 SYMLINK libspdk_bdev_malloc.so 00:04:18.786 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:18.786 CC module/bdev/split/vbdev_split.o 00:04:18.786 CC module/bdev/split/vbdev_split_rpc.o 00:04:18.786 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:18.786 LIB libspdk_bdev_lvol.a 00:04:18.786 LIB libspdk_bdev_null.a 00:04:18.786 SO libspdk_bdev_lvol.so.6.0 00:04:18.786 SO libspdk_bdev_null.so.6.0 00:04:19.045 SYMLINK libspdk_bdev_lvol.so 00:04:19.045 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:19.045 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:19.045 SYMLINK libspdk_bdev_null.so 00:04:19.045 CC module/bdev/uring/bdev_uring.o 00:04:19.045 CC module/bdev/raid/bdev_raid_rpc.o 00:04:19.045 LIB libspdk_bdev_passthru.a 00:04:19.045 SO libspdk_bdev_passthru.so.6.0 00:04:19.045 LIB libspdk_bdev_split.a 00:04:19.045 SO libspdk_bdev_split.so.6.0 00:04:19.045 SYMLINK libspdk_bdev_passthru.so 00:04:19.045 CC module/bdev/raid/bdev_raid_sb.o 00:04:19.045 CC module/bdev/nvme/nvme_rpc.o 00:04:19.045 LIB libspdk_bdev_zone_block.a 00:04:19.045 SYMLINK libspdk_bdev_split.so 00:04:19.303 CC module/bdev/uring/bdev_uring_rpc.o 00:04:19.303 CC module/bdev/aio/bdev_aio.o 00:04:19.303 SO libspdk_bdev_zone_block.so.6.0 00:04:19.303 CC module/bdev/aio/bdev_aio_rpc.o 00:04:19.303 SYMLINK libspdk_bdev_zone_block.so 00:04:19.303 CC module/bdev/raid/raid0.o 00:04:19.303 CC module/bdev/nvme/bdev_mdns_client.o 00:04:19.303 CC module/bdev/nvme/vbdev_opal.o 00:04:19.303 LIB libspdk_bdev_uring.a 00:04:19.303 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:19.562 CC module/bdev/ftl/bdev_ftl.o 00:04:19.562 SO libspdk_bdev_uring.so.6.0 00:04:19.562 SYMLINK libspdk_bdev_uring.so 00:04:19.562 LIB libspdk_bdev_aio.a 00:04:19.562 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:19.562 SO libspdk_bdev_aio.so.6.0 00:04:19.562 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:19.562 SYMLINK libspdk_bdev_aio.so 00:04:19.562 CC module/bdev/raid/raid1.o 00:04:19.562 CC module/bdev/raid/concat.o 00:04:19.820 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:19.820 CC module/bdev/iscsi/bdev_iscsi.o 00:04:19.820 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:19.820 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:19.820 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:19.820 LIB libspdk_bdev_ftl.a 00:04:19.820 SO libspdk_bdev_ftl.so.6.0 00:04:20.079 LIB libspdk_bdev_raid.a 00:04:20.079 SYMLINK libspdk_bdev_ftl.so 00:04:20.079 SO libspdk_bdev_raid.so.6.0 00:04:20.079 SYMLINK libspdk_bdev_raid.so 00:04:20.079 LIB libspdk_bdev_iscsi.a 00:04:20.338 SO libspdk_bdev_iscsi.so.6.0 00:04:20.338 SYMLINK libspdk_bdev_iscsi.so 00:04:20.338 LIB libspdk_bdev_virtio.a 00:04:20.338 SO libspdk_bdev_virtio.so.6.0 00:04:20.596 SYMLINK libspdk_bdev_virtio.so 00:04:20.855 LIB libspdk_bdev_nvme.a 00:04:20.855 SO libspdk_bdev_nvme.so.7.0 00:04:21.114 SYMLINK libspdk_bdev_nvme.so 00:04:21.681 CC module/event/subsystems/keyring/keyring.o 00:04:21.681 CC module/event/subsystems/scheduler/scheduler.o 00:04:21.681 CC module/event/subsystems/vmd/vmd.o 00:04:21.681 CC module/event/subsystems/iobuf/iobuf.o 00:04:21.681 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:21.681 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:21.681 CC 
module/event/subsystems/sock/sock.o 00:04:21.681 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:21.681 LIB libspdk_event_keyring.a 00:04:21.681 LIB libspdk_event_scheduler.a 00:04:21.681 LIB libspdk_event_vmd.a 00:04:21.681 LIB libspdk_event_sock.a 00:04:21.681 LIB libspdk_event_iobuf.a 00:04:21.681 LIB libspdk_event_vhost_blk.a 00:04:21.681 SO libspdk_event_keyring.so.1.0 00:04:21.681 SO libspdk_event_scheduler.so.4.0 00:04:21.681 SO libspdk_event_sock.so.5.0 00:04:21.681 SO libspdk_event_vmd.so.6.0 00:04:21.681 SO libspdk_event_vhost_blk.so.3.0 00:04:21.940 SO libspdk_event_iobuf.so.3.0 00:04:21.940 SYMLINK libspdk_event_keyring.so 00:04:21.940 SYMLINK libspdk_event_scheduler.so 00:04:21.940 SYMLINK libspdk_event_sock.so 00:04:21.940 SYMLINK libspdk_event_vhost_blk.so 00:04:21.940 SYMLINK libspdk_event_vmd.so 00:04:21.940 SYMLINK libspdk_event_iobuf.so 00:04:22.199 CC module/event/subsystems/accel/accel.o 00:04:22.457 LIB libspdk_event_accel.a 00:04:22.457 SO libspdk_event_accel.so.6.0 00:04:22.457 SYMLINK libspdk_event_accel.so 00:04:22.715 CC module/event/subsystems/bdev/bdev.o 00:04:22.973 LIB libspdk_event_bdev.a 00:04:22.973 SO libspdk_event_bdev.so.6.0 00:04:22.973 SYMLINK libspdk_event_bdev.so 00:04:23.232 CC module/event/subsystems/nbd/nbd.o 00:04:23.232 CC module/event/subsystems/scsi/scsi.o 00:04:23.232 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:23.232 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:23.232 CC module/event/subsystems/ublk/ublk.o 00:04:23.490 LIB libspdk_event_nbd.a 00:04:23.491 LIB libspdk_event_ublk.a 00:04:23.491 LIB libspdk_event_scsi.a 00:04:23.491 SO libspdk_event_nbd.so.6.0 00:04:23.491 SO libspdk_event_ublk.so.3.0 00:04:23.491 SO libspdk_event_scsi.so.6.0 00:04:23.491 SYMLINK libspdk_event_nbd.so 00:04:23.749 LIB libspdk_event_nvmf.a 00:04:23.749 SYMLINK libspdk_event_ublk.so 00:04:23.749 SYMLINK libspdk_event_scsi.so 00:04:23.749 SO libspdk_event_nvmf.so.6.0 00:04:23.749 SYMLINK libspdk_event_nvmf.so 00:04:24.008 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:24.008 CC module/event/subsystems/iscsi/iscsi.o 00:04:24.008 LIB libspdk_event_vhost_scsi.a 00:04:24.008 SO libspdk_event_vhost_scsi.so.3.0 00:04:24.008 LIB libspdk_event_iscsi.a 00:04:24.276 SO libspdk_event_iscsi.so.6.0 00:04:24.276 SYMLINK libspdk_event_vhost_scsi.so 00:04:24.276 SYMLINK libspdk_event_iscsi.so 00:04:24.276 SO libspdk.so.6.0 00:04:24.276 SYMLINK libspdk.so 00:04:24.549 CC app/trace_record/trace_record.o 00:04:24.549 CC app/spdk_lspci/spdk_lspci.o 00:04:24.549 CXX app/trace/trace.o 00:04:24.807 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:24.807 CC app/nvmf_tgt/nvmf_main.o 00:04:24.807 CC app/iscsi_tgt/iscsi_tgt.o 00:04:24.807 CC app/spdk_tgt/spdk_tgt.o 00:04:24.807 CC examples/util/zipf/zipf.o 00:04:24.807 CC examples/ioat/perf/perf.o 00:04:24.807 CC test/thread/poller_perf/poller_perf.o 00:04:24.807 LINK spdk_lspci 00:04:25.066 LINK nvmf_tgt 00:04:25.066 LINK interrupt_tgt 00:04:25.066 LINK iscsi_tgt 00:04:25.066 LINK poller_perf 00:04:25.066 LINK spdk_trace_record 00:04:25.066 LINK zipf 00:04:25.066 LINK ioat_perf 00:04:25.066 LINK spdk_tgt 00:04:25.066 CC examples/ioat/verify/verify.o 00:04:25.066 LINK spdk_trace 00:04:25.325 CC app/spdk_nvme_identify/identify.o 00:04:25.325 CC app/spdk_nvme_perf/perf.o 00:04:25.325 CC app/spdk_nvme_discover/discovery_aer.o 00:04:25.325 CC app/spdk_top/spdk_top.o 00:04:25.325 TEST_HEADER include/spdk/accel.h 00:04:25.325 TEST_HEADER include/spdk/accel_module.h 00:04:25.325 TEST_HEADER include/spdk/assert.h 
00:04:25.325 TEST_HEADER include/spdk/barrier.h 00:04:25.325 TEST_HEADER include/spdk/base64.h 00:04:25.325 TEST_HEADER include/spdk/bdev.h 00:04:25.325 TEST_HEADER include/spdk/bdev_module.h 00:04:25.325 TEST_HEADER include/spdk/bdev_zone.h 00:04:25.325 TEST_HEADER include/spdk/bit_array.h 00:04:25.325 TEST_HEADER include/spdk/bit_pool.h 00:04:25.325 TEST_HEADER include/spdk/blob_bdev.h 00:04:25.325 LINK verify 00:04:25.325 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:25.325 TEST_HEADER include/spdk/blobfs.h 00:04:25.325 TEST_HEADER include/spdk/blob.h 00:04:25.325 TEST_HEADER include/spdk/conf.h 00:04:25.325 TEST_HEADER include/spdk/config.h 00:04:25.325 TEST_HEADER include/spdk/cpuset.h 00:04:25.325 TEST_HEADER include/spdk/crc16.h 00:04:25.325 TEST_HEADER include/spdk/crc32.h 00:04:25.325 TEST_HEADER include/spdk/crc64.h 00:04:25.325 TEST_HEADER include/spdk/dif.h 00:04:25.325 TEST_HEADER include/spdk/dma.h 00:04:25.325 TEST_HEADER include/spdk/endian.h 00:04:25.325 TEST_HEADER include/spdk/env_dpdk.h 00:04:25.325 TEST_HEADER include/spdk/env.h 00:04:25.325 TEST_HEADER include/spdk/event.h 00:04:25.325 TEST_HEADER include/spdk/fd_group.h 00:04:25.325 TEST_HEADER include/spdk/fd.h 00:04:25.325 TEST_HEADER include/spdk/file.h 00:04:25.325 TEST_HEADER include/spdk/ftl.h 00:04:25.325 TEST_HEADER include/spdk/gpt_spec.h 00:04:25.325 TEST_HEADER include/spdk/hexlify.h 00:04:25.325 TEST_HEADER include/spdk/histogram_data.h 00:04:25.325 TEST_HEADER include/spdk/idxd.h 00:04:25.325 TEST_HEADER include/spdk/idxd_spec.h 00:04:25.325 TEST_HEADER include/spdk/init.h 00:04:25.325 CC app/spdk_dd/spdk_dd.o 00:04:25.325 TEST_HEADER include/spdk/ioat.h 00:04:25.325 CC test/dma/test_dma/test_dma.o 00:04:25.325 TEST_HEADER include/spdk/ioat_spec.h 00:04:25.325 TEST_HEADER include/spdk/iscsi_spec.h 00:04:25.325 TEST_HEADER include/spdk/json.h 00:04:25.325 TEST_HEADER include/spdk/jsonrpc.h 00:04:25.325 TEST_HEADER include/spdk/keyring.h 00:04:25.325 TEST_HEADER include/spdk/keyring_module.h 00:04:25.325 TEST_HEADER include/spdk/likely.h 00:04:25.325 TEST_HEADER include/spdk/log.h 00:04:25.325 TEST_HEADER include/spdk/lvol.h 00:04:25.325 TEST_HEADER include/spdk/memory.h 00:04:25.325 TEST_HEADER include/spdk/mmio.h 00:04:25.325 TEST_HEADER include/spdk/nbd.h 00:04:25.325 TEST_HEADER include/spdk/net.h 00:04:25.326 TEST_HEADER include/spdk/notify.h 00:04:25.326 TEST_HEADER include/spdk/nvme.h 00:04:25.326 TEST_HEADER include/spdk/nvme_intel.h 00:04:25.326 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:25.326 CC test/app/bdev_svc/bdev_svc.o 00:04:25.326 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:25.326 TEST_HEADER include/spdk/nvme_spec.h 00:04:25.326 TEST_HEADER include/spdk/nvme_zns.h 00:04:25.326 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:25.326 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:25.326 TEST_HEADER include/spdk/nvmf.h 00:04:25.326 TEST_HEADER include/spdk/nvmf_spec.h 00:04:25.326 TEST_HEADER include/spdk/nvmf_transport.h 00:04:25.326 TEST_HEADER include/spdk/opal.h 00:04:25.326 TEST_HEADER include/spdk/opal_spec.h 00:04:25.584 TEST_HEADER include/spdk/pci_ids.h 00:04:25.584 TEST_HEADER include/spdk/pipe.h 00:04:25.584 TEST_HEADER include/spdk/queue.h 00:04:25.584 TEST_HEADER include/spdk/reduce.h 00:04:25.584 TEST_HEADER include/spdk/rpc.h 00:04:25.584 TEST_HEADER include/spdk/scheduler.h 00:04:25.584 TEST_HEADER include/spdk/scsi.h 00:04:25.584 TEST_HEADER include/spdk/scsi_spec.h 00:04:25.584 TEST_HEADER include/spdk/sock.h 00:04:25.584 TEST_HEADER include/spdk/stdinc.h 00:04:25.584 
TEST_HEADER include/spdk/string.h 00:04:25.584 TEST_HEADER include/spdk/thread.h 00:04:25.584 TEST_HEADER include/spdk/trace.h 00:04:25.584 TEST_HEADER include/spdk/trace_parser.h 00:04:25.584 TEST_HEADER include/spdk/tree.h 00:04:25.584 TEST_HEADER include/spdk/ublk.h 00:04:25.584 LINK spdk_nvme_discover 00:04:25.584 TEST_HEADER include/spdk/util.h 00:04:25.584 TEST_HEADER include/spdk/uuid.h 00:04:25.584 TEST_HEADER include/spdk/version.h 00:04:25.584 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:25.585 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:25.585 TEST_HEADER include/spdk/vhost.h 00:04:25.585 CC app/fio/nvme/fio_plugin.o 00:04:25.585 TEST_HEADER include/spdk/vmd.h 00:04:25.585 TEST_HEADER include/spdk/xor.h 00:04:25.585 TEST_HEADER include/spdk/zipf.h 00:04:25.585 CXX test/cpp_headers/accel.o 00:04:25.585 LINK bdev_svc 00:04:25.843 CXX test/cpp_headers/accel_module.o 00:04:25.843 LINK test_dma 00:04:25.843 CC examples/thread/thread/thread_ex.o 00:04:25.843 LINK spdk_dd 00:04:25.843 CC app/vhost/vhost.o 00:04:25.843 CXX test/cpp_headers/assert.o 00:04:26.101 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:26.101 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:26.101 LINK spdk_nvme 00:04:26.101 LINK vhost 00:04:26.101 LINK thread 00:04:26.102 LINK spdk_nvme_perf 00:04:26.102 CXX test/cpp_headers/barrier.o 00:04:26.102 LINK spdk_top 00:04:26.102 LINK spdk_nvme_identify 00:04:26.360 CC app/fio/bdev/fio_plugin.o 00:04:26.360 CC test/env/mem_callbacks/mem_callbacks.o 00:04:26.360 CXX test/cpp_headers/base64.o 00:04:26.360 CXX test/cpp_headers/bdev.o 00:04:26.360 CXX test/cpp_headers/bdev_module.o 00:04:26.360 CC test/env/vtophys/vtophys.o 00:04:26.360 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:26.360 LINK nvme_fuzz 00:04:26.619 CC examples/sock/hello_world/hello_sock.o 00:04:26.619 LINK mem_callbacks 00:04:26.619 LINK vtophys 00:04:26.619 CC test/app/histogram_perf/histogram_perf.o 00:04:26.619 CXX test/cpp_headers/bdev_zone.o 00:04:26.619 LINK env_dpdk_post_init 00:04:26.619 CXX test/cpp_headers/bit_array.o 00:04:26.619 CXX test/cpp_headers/bit_pool.o 00:04:26.619 CC examples/vmd/lsvmd/lsvmd.o 00:04:26.619 LINK histogram_perf 00:04:26.877 LINK hello_sock 00:04:26.877 CC test/env/memory/memory_ut.o 00:04:26.877 LINK spdk_bdev 00:04:26.877 CXX test/cpp_headers/blob_bdev.o 00:04:26.877 CC test/env/pci/pci_ut.o 00:04:26.877 LINK lsvmd 00:04:26.877 CC test/app/jsoncat/jsoncat.o 00:04:26.877 CC test/event/event_perf/event_perf.o 00:04:26.877 CC test/event/reactor/reactor.o 00:04:27.136 CC test/event/reactor_perf/reactor_perf.o 00:04:27.136 CXX test/cpp_headers/blobfs_bdev.o 00:04:27.136 LINK jsoncat 00:04:27.136 CC examples/vmd/led/led.o 00:04:27.136 LINK event_perf 00:04:27.136 LINK reactor 00:04:27.136 CC test/nvme/aer/aer.o 00:04:27.136 LINK reactor_perf 00:04:27.136 CXX test/cpp_headers/blobfs.o 00:04:27.136 LINK pci_ut 00:04:27.395 CC test/nvme/reset/reset.o 00:04:27.395 LINK led 00:04:27.395 CXX test/cpp_headers/blob.o 00:04:27.395 CC test/nvme/sgl/sgl.o 00:04:27.395 CC test/nvme/e2edp/nvme_dp.o 00:04:27.395 CC test/event/app_repeat/app_repeat.o 00:04:27.395 LINK aer 00:04:27.395 LINK memory_ut 00:04:27.653 LINK reset 00:04:27.653 CXX test/cpp_headers/conf.o 00:04:27.653 LINK app_repeat 00:04:27.653 CC examples/idxd/perf/perf.o 00:04:27.653 CXX test/cpp_headers/config.o 00:04:27.653 CC examples/accel/perf/accel_perf.o 00:04:27.653 CXX test/cpp_headers/cpuset.o 00:04:27.653 LINK sgl 00:04:27.653 LINK nvme_dp 00:04:27.653 LINK iscsi_fuzz 00:04:27.912 CC 
test/rpc_client/rpc_client_test.o 00:04:27.912 CXX test/cpp_headers/crc16.o 00:04:27.912 CC test/event/scheduler/scheduler.o 00:04:27.912 CC test/accel/dif/dif.o 00:04:27.912 CC examples/blob/hello_world/hello_blob.o 00:04:27.912 CC test/nvme/overhead/overhead.o 00:04:27.912 LINK rpc_client_test 00:04:27.912 LINK idxd_perf 00:04:28.171 CC examples/nvme/hello_world/hello_world.o 00:04:28.171 CXX test/cpp_headers/crc32.o 00:04:28.171 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:28.171 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:28.171 LINK accel_perf 00:04:28.171 LINK scheduler 00:04:28.171 LINK hello_blob 00:04:28.171 CXX test/cpp_headers/crc64.o 00:04:28.171 CC test/app/stub/stub.o 00:04:28.171 LINK hello_world 00:04:28.171 LINK overhead 00:04:28.430 CXX test/cpp_headers/dif.o 00:04:28.430 LINK dif 00:04:28.430 CC test/blobfs/mkfs/mkfs.o 00:04:28.430 LINK stub 00:04:28.430 CXX test/cpp_headers/dma.o 00:04:28.430 CC examples/blob/cli/blobcli.o 00:04:28.430 CC examples/nvme/reconnect/reconnect.o 00:04:28.430 CC test/nvme/err_injection/err_injection.o 00:04:28.430 LINK vhost_fuzz 00:04:28.688 CXX test/cpp_headers/endian.o 00:04:28.688 CXX test/cpp_headers/env_dpdk.o 00:04:28.688 LINK mkfs 00:04:28.688 CC examples/bdev/hello_world/hello_bdev.o 00:04:28.688 CC test/lvol/esnap/esnap.o 00:04:28.688 CXX test/cpp_headers/env.o 00:04:28.688 LINK err_injection 00:04:28.947 CC test/nvme/startup/startup.o 00:04:28.947 CXX test/cpp_headers/event.o 00:04:28.947 CC test/nvme/reserve/reserve.o 00:04:28.947 LINK reconnect 00:04:28.947 CC test/bdev/bdevio/bdevio.o 00:04:28.947 LINK hello_bdev 00:04:28.947 CXX test/cpp_headers/fd_group.o 00:04:28.947 CC test/nvme/simple_copy/simple_copy.o 00:04:28.947 LINK blobcli 00:04:29.206 LINK startup 00:04:29.206 CXX test/cpp_headers/fd.o 00:04:29.206 LINK reserve 00:04:29.206 CC examples/nvme/arbitration/arbitration.o 00:04:29.206 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:29.206 LINK simple_copy 00:04:29.206 CC examples/bdev/bdevperf/bdevperf.o 00:04:29.206 CXX test/cpp_headers/file.o 00:04:29.206 CXX test/cpp_headers/ftl.o 00:04:29.206 LINK bdevio 00:04:29.464 CC examples/nvme/hotplug/hotplug.o 00:04:29.464 CC test/nvme/connect_stress/connect_stress.o 00:04:29.464 CC test/nvme/boot_partition/boot_partition.o 00:04:29.464 CXX test/cpp_headers/gpt_spec.o 00:04:29.464 LINK arbitration 00:04:29.464 LINK connect_stress 00:04:29.464 CC test/nvme/compliance/nvme_compliance.o 00:04:29.722 CC test/nvme/fused_ordering/fused_ordering.o 00:04:29.722 LINK hotplug 00:04:29.722 LINK nvme_manage 00:04:29.722 LINK boot_partition 00:04:29.722 CXX test/cpp_headers/hexlify.o 00:04:29.723 CXX test/cpp_headers/histogram_data.o 00:04:29.723 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:29.723 LINK fused_ordering 00:04:29.723 CC examples/nvme/abort/abort.o 00:04:29.981 CXX test/cpp_headers/idxd.o 00:04:29.981 LINK nvme_compliance 00:04:29.981 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:29.981 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:29.981 CXX test/cpp_headers/idxd_spec.o 00:04:29.981 LINK cmb_copy 00:04:29.981 LINK bdevperf 00:04:29.981 CXX test/cpp_headers/init.o 00:04:29.981 CC test/nvme/fdp/fdp.o 00:04:30.240 CXX test/cpp_headers/ioat.o 00:04:30.240 CC test/nvme/cuse/cuse.o 00:04:30.240 LINK pmr_persistence 00:04:30.240 CXX test/cpp_headers/ioat_spec.o 00:04:30.240 LINK doorbell_aers 00:04:30.240 LINK abort 00:04:30.240 CXX test/cpp_headers/iscsi_spec.o 00:04:30.240 CXX test/cpp_headers/json.o 00:04:30.240 CXX test/cpp_headers/jsonrpc.o 00:04:30.240 CXX 
test/cpp_headers/keyring.o 00:04:30.240 CXX test/cpp_headers/keyring_module.o 00:04:30.240 CXX test/cpp_headers/likely.o 00:04:30.497 LINK fdp 00:04:30.497 CXX test/cpp_headers/log.o 00:04:30.497 CXX test/cpp_headers/lvol.o 00:04:30.497 CXX test/cpp_headers/memory.o 00:04:30.497 CXX test/cpp_headers/mmio.o 00:04:30.497 CXX test/cpp_headers/nbd.o 00:04:30.497 CXX test/cpp_headers/net.o 00:04:30.497 CXX test/cpp_headers/notify.o 00:04:30.497 CXX test/cpp_headers/nvme.o 00:04:30.497 CXX test/cpp_headers/nvme_intel.o 00:04:30.497 CXX test/cpp_headers/nvme_ocssd.o 00:04:30.497 CC examples/nvmf/nvmf/nvmf.o 00:04:30.755 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:30.755 CXX test/cpp_headers/nvme_spec.o 00:04:30.755 CXX test/cpp_headers/nvme_zns.o 00:04:30.755 CXX test/cpp_headers/nvmf_cmd.o 00:04:30.755 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:30.755 CXX test/cpp_headers/nvmf.o 00:04:30.755 CXX test/cpp_headers/nvmf_spec.o 00:04:30.755 CXX test/cpp_headers/nvmf_transport.o 00:04:30.755 CXX test/cpp_headers/opal.o 00:04:30.755 CXX test/cpp_headers/opal_spec.o 00:04:31.014 CXX test/cpp_headers/pci_ids.o 00:04:31.014 CXX test/cpp_headers/pipe.o 00:04:31.014 LINK nvmf 00:04:31.014 CXX test/cpp_headers/queue.o 00:04:31.014 CXX test/cpp_headers/reduce.o 00:04:31.014 CXX test/cpp_headers/rpc.o 00:04:31.014 CXX test/cpp_headers/scheduler.o 00:04:31.014 CXX test/cpp_headers/scsi.o 00:04:31.014 CXX test/cpp_headers/scsi_spec.o 00:04:31.014 CXX test/cpp_headers/sock.o 00:04:31.014 CXX test/cpp_headers/stdinc.o 00:04:31.014 CXX test/cpp_headers/string.o 00:04:31.014 CXX test/cpp_headers/thread.o 00:04:31.272 CXX test/cpp_headers/trace.o 00:04:31.272 CXX test/cpp_headers/trace_parser.o 00:04:31.272 CXX test/cpp_headers/tree.o 00:04:31.272 CXX test/cpp_headers/ublk.o 00:04:31.272 CXX test/cpp_headers/util.o 00:04:31.272 CXX test/cpp_headers/uuid.o 00:04:31.272 CXX test/cpp_headers/version.o 00:04:31.272 CXX test/cpp_headers/vfio_user_pci.o 00:04:31.272 CXX test/cpp_headers/vfio_user_spec.o 00:04:31.272 CXX test/cpp_headers/vhost.o 00:04:31.272 CXX test/cpp_headers/vmd.o 00:04:31.272 CXX test/cpp_headers/xor.o 00:04:31.272 CXX test/cpp_headers/zipf.o 00:04:31.531 LINK cuse 00:04:33.453 LINK esnap 00:04:33.734 00:04:33.734 real 0m55.458s 00:04:33.734 user 4m56.834s 00:04:33.734 sys 1m9.365s 00:04:33.734 20:45:44 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:33.734 ************************************ 00:04:33.734 END TEST make 00:04:33.734 ************************************ 00:04:33.734 20:45:44 make -- common/autotest_common.sh@10 -- $ set +x 00:04:33.734 20:45:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:33.734 20:45:44 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:33.734 20:45:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:33.734 20:45:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:33.734 20:45:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:33.734 20:45:44 -- pm/common@44 -- $ pid=6137 00:04:33.734 20:45:44 -- pm/common@50 -- $ kill -TERM 6137 00:04:33.734 20:45:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:33.734 20:45:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:33.734 20:45:44 -- pm/common@44 -- $ pid=6139 00:04:33.734 20:45:44 -- pm/common@50 -- $ kill -TERM 6139 00:04:33.734 20:45:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
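autotest.sh starts here by sourcing test/nvmf/common.sh, and the lines that follow trace the NVMe-oF test defaults it sets: NVMF_PORT=4420, a loopback TCP address, a host NQN generated with nvme gen-hostnqn, and the 'nvme connect' wrapper. As an illustrative sketch of how those defaults are typically combined into an initiator command (this exact invocation is not part of the log; the flags are standard nvme-cli options):

# Hypothetical example built from the defaults traced below; not taken from this run.
nvme connect -t tcp -a 127.0.0.1 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn="$(nvme gen-hostnqn)"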
00:04:33.734 20:45:44 -- nvmf/common.sh@7 -- # uname -s 00:04:33.734 20:45:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.734 20:45:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.734 20:45:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.734 20:45:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.734 20:45:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.734 20:45:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.734 20:45:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.734 20:45:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.734 20:45:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.734 20:45:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.734 20:45:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:04:33.734 20:45:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:04:33.734 20:45:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.734 20:45:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.734 20:45:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:33.734 20:45:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.734 20:45:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:33.734 20:45:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.734 20:45:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.734 20:45:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.993 20:45:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.993 20:45:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.993 20:45:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.993 20:45:44 -- paths/export.sh@5 -- # export PATH 00:04:33.994 20:45:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.994 20:45:44 -- nvmf/common.sh@47 -- # : 0 00:04:33.994 20:45:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:33.994 20:45:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:33.994 20:45:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.994 20:45:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.994 20:45:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.994 20:45:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:33.994 20:45:44 -- nvmf/common.sh@35 -- 
# '[' 0 -eq 1 ']' 00:04:33.994 20:45:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:33.994 20:45:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:33.994 20:45:44 -- spdk/autotest.sh@32 -- # uname -s 00:04:33.994 20:45:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:33.994 20:45:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:33.994 20:45:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:33.994 20:45:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:33.994 20:45:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:33.994 20:45:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:33.994 20:45:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:33.994 20:45:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:33.994 20:45:44 -- spdk/autotest.sh@48 -- # udevadm_pid=65207 00:04:33.994 20:45:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:33.994 20:45:44 -- pm/common@17 -- # local monitor 00:04:33.994 20:45:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:33.994 20:45:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:33.994 20:45:44 -- pm/common@25 -- # sleep 1 00:04:33.994 20:45:44 -- pm/common@21 -- # date +%s 00:04:33.994 20:45:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:33.994 20:45:44 -- pm/common@21 -- # date +%s 00:04:33.994 20:45:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1723409144 00:04:33.994 20:45:44 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1723409144 00:04:33.994 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1723409144_collect-cpu-load.pm.log 00:04:33.994 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1723409144_collect-vmstat.pm.log 00:04:34.931 20:45:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:34.931 20:45:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:34.931 20:45:45 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:34.931 20:45:45 -- common/autotest_common.sh@10 -- # set +x 00:04:34.931 20:45:45 -- spdk/autotest.sh@59 -- # create_test_list 00:04:34.931 20:45:45 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:34.931 20:45:45 -- common/autotest_common.sh@10 -- # set +x 00:04:34.931 20:45:45 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:34.931 20:45:45 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:34.931 20:45:45 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:34.931 20:45:45 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:34.931 20:45:45 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:34.931 20:45:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:34.931 20:45:45 -- common/autotest_common.sh@1451 -- # uname 00:04:34.931 20:45:45 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:34.931 20:45:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:34.931 20:45:45 -- common/autotest_common.sh@1471 -- # uname 00:04:34.931 20:45:45 
-- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:34.931 20:45:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:34.931 20:45:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:34.931 20:45:45 -- spdk/autotest.sh@72 -- # hash lcov 00:04:34.931 20:45:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:34.931 20:45:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:34.931 --rc lcov_branch_coverage=1 00:04:34.931 --rc lcov_function_coverage=1 00:04:34.931 --rc genhtml_branch_coverage=1 00:04:34.931 --rc genhtml_function_coverage=1 00:04:34.931 --rc genhtml_legend=1 00:04:34.931 --rc geninfo_all_blocks=1 00:04:34.931 ' 00:04:34.931 20:45:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:34.931 --rc lcov_branch_coverage=1 00:04:34.931 --rc lcov_function_coverage=1 00:04:34.931 --rc genhtml_branch_coverage=1 00:04:34.931 --rc genhtml_function_coverage=1 00:04:34.931 --rc genhtml_legend=1 00:04:34.931 --rc geninfo_all_blocks=1 00:04:34.931 ' 00:04:34.931 20:45:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:34.931 --rc lcov_branch_coverage=1 00:04:34.931 --rc lcov_function_coverage=1 00:04:34.931 --rc genhtml_branch_coverage=1 00:04:34.931 --rc genhtml_function_coverage=1 00:04:34.931 --rc genhtml_legend=1 00:04:34.931 --rc geninfo_all_blocks=1 00:04:34.931 --no-external' 00:04:34.931 20:45:45 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:34.931 --rc lcov_branch_coverage=1 00:04:34.931 --rc lcov_function_coverage=1 00:04:34.931 --rc genhtml_branch_coverage=1 00:04:34.931 --rc genhtml_function_coverage=1 00:04:34.931 --rc genhtml_legend=1 00:04:34.931 --rc geninfo_all_blocks=1 00:04:34.931 --no-external' 00:04:34.931 20:45:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:35.190 lcov: LCOV version 1.15 00:04:35.190 20:45:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:50.064 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:50.064 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:00.038 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:00.038 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:00.038 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:00.038 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:00.038 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:00.038 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:00.038 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:00.038 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:00.038 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:00.038 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:00.038 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:00.038 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:00.038 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:00.038 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:00.038 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:00.038 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:00.038 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:00.038 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:00.038 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:00.038 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:00.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:00.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:00.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:00.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:00.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:00.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:00.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:00.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:00.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:00.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:00.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:00.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:00.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:00.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:00.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:00.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:00.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:00.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:00.296 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:00.296 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no 
functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 
00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:00.297 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:00.297 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:00.556 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:00.556 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:00.556 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:03.843 20:46:14 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:03.843 20:46:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:03.843 20:46:14 -- common/autotest_common.sh@10 -- # set +x 00:05:03.843 20:46:14 -- spdk/autotest.sh@91 -- # rm -f 00:05:03.843 20:46:14 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.360 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:04.360 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:04.360 20:46:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:04.360 20:46:14 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:04.360 20:46:14 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:04.360 20:46:14 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:04.360 20:46:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:04.360 20:46:14 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:04.360 20:46:14 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:04.360 20:46:14 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.360 20:46:14 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:04.360 20:46:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:04.360 20:46:14 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:04.360 20:46:14 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:04.360 20:46:14 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:04.360 20:46:14 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:04.360 20:46:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:04.360 20:46:14 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:05:04.360 20:46:14 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:05:04.360 20:46:14 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:04.360 20:46:14 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:04.360 20:46:14 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:04.360 20:46:14 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:05:04.360 20:46:14 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:05:04.360 20:46:14 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:04.360 20:46:14 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:04.360 20:46:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:04.360 20:46:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.360 20:46:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:04.360 20:46:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:04.360 20:46:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:04.360 20:46:14 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:04.360 No valid GPT data, bailing 00:05:04.360 20:46:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n1 00:05:04.360 20:46:15 -- scripts/common.sh@391 -- # pt= 00:05:04.360 20:46:15 -- scripts/common.sh@392 -- # return 1 00:05:04.360 20:46:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:04.360 1+0 records in 00:05:04.360 1+0 records out 00:05:04.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423814 s, 247 MB/s 00:05:04.360 20:46:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.360 20:46:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:04.360 20:46:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:04.360 20:46:15 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:04.360 20:46:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:04.360 No valid GPT data, bailing 00:05:04.360 20:46:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:04.360 20:46:15 -- scripts/common.sh@391 -- # pt= 00:05:04.360 20:46:15 -- scripts/common.sh@392 -- # return 1 00:05:04.360 20:46:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:04.360 1+0 records in 00:05:04.360 1+0 records out 00:05:04.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00439137 s, 239 MB/s 00:05:04.360 20:46:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.360 20:46:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:04.360 20:46:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:04.360 20:46:15 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:04.360 20:46:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:04.619 No valid GPT data, bailing 00:05:04.619 20:46:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:04.619 20:46:15 -- scripts/common.sh@391 -- # pt= 00:05:04.619 20:46:15 -- scripts/common.sh@392 -- # return 1 00:05:04.619 20:46:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:04.619 1+0 records in 00:05:04.619 1+0 records out 00:05:04.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045103 s, 232 MB/s 00:05:04.619 20:46:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.619 20:46:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:04.619 20:46:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:04.619 20:46:15 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:04.619 20:46:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:04.619 No valid GPT data, bailing 00:05:04.619 20:46:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:04.619 20:46:15 -- scripts/common.sh@391 -- # pt= 00:05:04.619 20:46:15 -- scripts/common.sh@392 -- # return 1 00:05:04.619 20:46:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:04.619 1+0 records in 00:05:04.619 1+0 records out 00:05:04.619 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412992 s, 254 MB/s 00:05:04.619 20:46:15 -- spdk/autotest.sh@118 -- # sync 00:05:04.619 20:46:15 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:04.619 20:46:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:04.619 20:46:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:06.522 20:46:17 -- spdk/autotest.sh@124 -- # uname -s 00:05:06.522 20:46:17 -- spdk/autotest.sh@124 -- # [[ Linux == Linux ]] 00:05:06.522 20:46:17 -- spdk/autotest.sh@124 -- # [[ 0 -eq 1 
]] 00:05:06.522 20:46:17 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:07.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.090 Hugepages 00:05:07.090 node hugesize free / total 00:05:07.090 node0 1048576kB 0 / 0 00:05:07.090 node0 2048kB 0 / 0 00:05:07.090 00:05:07.090 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:07.348 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:07.348 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:07.348 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:07.348 20:46:18 -- spdk/autotest.sh@130 -- # uname -s 00:05:07.348 20:46:18 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:07.348 20:46:18 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:07.348 20:46:18 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.173 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.173 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.173 20:46:18 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:09.146 20:46:19 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:09.146 20:46:19 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:09.146 20:46:19 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.146 20:46:19 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:09.146 20:46:19 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:09.146 20:46:19 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:09.146 20:46:19 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.146 20:46:19 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:09.146 20:46:19 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:09.431 20:46:19 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:09.431 20:46:19 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:09.431 20:46:19 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.689 Waiting for block devices as requested 00:05:09.689 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.689 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:09.947 20:46:20 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:09.947 20:46:20 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:09.947 20:46:20 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:05:09.947 20:46:20 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:09.947 20:46:20 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:09.947 20:46:20 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:09.947 20:46:20 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:09.947 20:46:20 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:05:09.947 20:46:20 -- common/autotest_common.sh@1535 -- 
# nvme_ctrlr=/dev/nvme1 00:05:09.947 20:46:20 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:05:09.947 20:46:20 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:05:09.947 20:46:20 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:09.947 20:46:20 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:09.947 20:46:20 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:09.947 20:46:20 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:09.947 20:46:20 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:09.947 20:46:20 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:05:09.947 20:46:20 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:09.947 20:46:20 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:09.947 20:46:20 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:09.947 20:46:20 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:09.947 20:46:20 -- common/autotest_common.sh@1553 -- # continue 00:05:09.947 20:46:20 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:09.947 20:46:20 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:09.947 20:46:20 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:09.947 20:46:20 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:05:09.947 20:46:20 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:09.948 20:46:20 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:09.948 20:46:20 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:09.948 20:46:20 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:09.948 20:46:20 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:09.948 20:46:20 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:09.948 20:46:20 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:09.948 20:46:20 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:09.948 20:46:20 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:09.948 20:46:20 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:09.948 20:46:20 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:09.948 20:46:20 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:09.948 20:46:20 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:09.948 20:46:20 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:09.948 20:46:20 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:09.948 20:46:20 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:09.948 20:46:20 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:09.948 20:46:20 -- common/autotest_common.sh@1553 -- # continue 00:05:09.948 20:46:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:09.948 20:46:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.948 20:46:20 -- common/autotest_common.sh@10 -- # set +x 00:05:09.948 20:46:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:09.948 20:46:20 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:09.948 20:46:20 -- common/autotest_common.sh@10 -- # set +x 00:05:09.948 20:46:20 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:10.514 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:05:10.773 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.773 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.773 20:46:21 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:10.773 20:46:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.773 20:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:10.773 20:46:21 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:10.773 20:46:21 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:10.773 20:46:21 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:10.773 20:46:21 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:10.773 20:46:21 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:10.773 20:46:21 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:10.773 20:46:21 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:10.773 20:46:21 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:10.773 20:46:21 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.773 20:46:21 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:10.773 20:46:21 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:10.773 20:46:21 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:05:10.773 20:46:21 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:10.773 20:46:21 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:10.773 20:46:21 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:10.773 20:46:21 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:10.773 20:46:21 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:10.773 20:46:21 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:10.773 20:46:21 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:10.773 20:46:21 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:10.773 20:46:21 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:10.773 20:46:21 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:10.773 20:46:21 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:10.773 20:46:21 -- common/autotest_common.sh@1589 -- # return 0 00:05:10.773 20:46:21 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:10.773 20:46:21 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:10.773 20:46:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:10.773 20:46:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:10.773 20:46:21 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:10.773 20:46:21 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:10.773 20:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:10.773 20:46:21 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:05:10.773 20:46:21 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:10.773 20:46:21 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:10.773 20:46:21 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:10.773 20:46:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:10.773 20:46:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.773 20:46:21 -- common/autotest_common.sh@10 -- # set +x 00:05:10.773 ************************************ 00:05:10.773 START TEST env 00:05:10.773 
************************************ 00:05:10.773 20:46:21 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:11.032 * Looking for test storage... 00:05:11.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:11.032 20:46:21 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:11.032 20:46:21 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.032 20:46:21 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.032 20:46:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.032 ************************************ 00:05:11.032 START TEST env_memory 00:05:11.032 ************************************ 00:05:11.032 20:46:21 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:11.032 00:05:11.032 00:05:11.032 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.032 http://cunit.sourceforge.net/ 00:05:11.032 00:05:11.032 00:05:11.032 Suite: memory 00:05:11.032 Test: alloc and free memory map ...[2024-08-11 20:46:21.670401] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:11.032 passed 00:05:11.032 Test: mem map translation ...[2024-08-11 20:46:21.701381] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:11.032 [2024-08-11 20:46:21.701427] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:11.032 [2024-08-11 20:46:21.701509] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:11.032 [2024-08-11 20:46:21.701521] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:11.032 passed 00:05:11.032 Test: mem map registration ...[2024-08-11 20:46:21.765257] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:11.032 [2024-08-11 20:46:21.765292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:11.032 passed 00:05:11.291 Test: mem map adjacent registrations ...passed 00:05:11.291 00:05:11.291 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.291 suites 1 1 n/a 0 0 00:05:11.291 tests 4 4 4 0 0 00:05:11.291 asserts 152 152 152 0 n/a 00:05:11.291 00:05:11.291 Elapsed time = 0.213 seconds 00:05:11.291 00:05:11.291 real 0m0.228s 00:05:11.291 user 0m0.211s 00:05:11.291 sys 0m0.014s 00:05:11.291 20:46:21 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.291 ************************************ 00:05:11.291 END TEST env_memory 00:05:11.291 ************************************ 00:05:11.291 20:46:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:11.291 20:46:21 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:11.291 20:46:21 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.291 20:46:21 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.291 20:46:21 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.291 ************************************ 00:05:11.291 START TEST env_vtophys 00:05:11.291 ************************************ 00:05:11.291 20:46:21 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:11.291 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:11.291 EAL: lib.eal log level changed from notice to debug 00:05:11.291 EAL: Detected lcore 0 as core 0 on socket 0 00:05:11.291 EAL: Detected lcore 1 as core 0 on socket 0 00:05:11.291 EAL: Detected lcore 2 as core 0 on socket 0 00:05:11.291 EAL: Detected lcore 3 as core 0 on socket 0 00:05:11.291 EAL: Detected lcore 4 as core 0 on socket 0 00:05:11.291 EAL: Detected lcore 5 as core 0 on socket 0 00:05:11.291 EAL: Detected lcore 6 as core 0 on socket 0 00:05:11.291 EAL: Detected lcore 7 as core 0 on socket 0 00:05:11.291 EAL: Detected lcore 8 as core 0 on socket 0 00:05:11.291 EAL: Detected lcore 9 as core 0 on socket 0 00:05:11.291 EAL: Maximum logical cores by configuration: 128 00:05:11.291 EAL: Detected CPU lcores: 10 00:05:11.291 EAL: Detected NUMA nodes: 1 00:05:11.291 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:11.291 EAL: Detected shared linkage of DPDK 00:05:11.291 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:11.291 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:11.291 EAL: Registered [vdev] bus. 00:05:11.291 EAL: bus.vdev log level changed from disabled to notice 00:05:11.291 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:11.291 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:11.291 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:11.292 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:11.292 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:11.292 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:11.292 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:11.292 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:11.292 EAL: No shared files mode enabled, IPC will be disabled 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: Selected IOVA mode 'PA' 00:05:11.292 EAL: Probing VFIO support... 00:05:11.292 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:11.292 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:11.292 EAL: Ask a virtual area of 0x2e000 bytes 00:05:11.292 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:11.292 EAL: Setting up physically contiguous memory... 
00:05:11.292 EAL: Setting maximum number of open files to 524288 00:05:11.292 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:11.292 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:11.292 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.292 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:11.292 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.292 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.292 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:11.292 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:11.292 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.292 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:11.292 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.292 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.292 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:11.292 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:11.292 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.292 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:11.292 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.292 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.292 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:11.292 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:11.292 EAL: Ask a virtual area of 0x61000 bytes 00:05:11.292 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:11.292 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:11.292 EAL: Ask a virtual area of 0x400000000 bytes 00:05:11.292 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:11.292 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:11.292 EAL: Hugepages will be freed exactly as allocated. 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: TSC frequency is ~2200000 KHz 00:05:11.292 EAL: Main lcore 0 is ready (tid=7fed576aaa00;cpuset=[0]) 00:05:11.292 EAL: Trying to obtain current memory policy. 00:05:11.292 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.292 EAL: Restoring previous memory policy: 0 00:05:11.292 EAL: request: mp_malloc_sync 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: Heap on socket 0 was expanded by 2MB 00:05:11.292 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:11.292 EAL: Mem event callback 'spdk:(nil)' registered 00:05:11.292 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:11.292 00:05:11.292 00:05:11.292 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.292 http://cunit.sourceforge.net/ 00:05:11.292 00:05:11.292 00:05:11.292 Suite: components_suite 00:05:11.292 Test: vtophys_malloc_test ...passed 00:05:11.292 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:11.292 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.292 EAL: Restoring previous memory policy: 4 00:05:11.292 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.292 EAL: request: mp_malloc_sync 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: Heap on socket 0 was expanded by 4MB 00:05:11.292 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.292 EAL: request: mp_malloc_sync 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: Heap on socket 0 was shrunk by 4MB 00:05:11.292 EAL: Trying to obtain current memory policy. 00:05:11.292 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.292 EAL: Restoring previous memory policy: 4 00:05:11.292 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.292 EAL: request: mp_malloc_sync 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: Heap on socket 0 was expanded by 6MB 00:05:11.292 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.292 EAL: request: mp_malloc_sync 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: Heap on socket 0 was shrunk by 6MB 00:05:11.292 EAL: Trying to obtain current memory policy. 00:05:11.292 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.292 EAL: Restoring previous memory policy: 4 00:05:11.292 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.292 EAL: request: mp_malloc_sync 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: Heap on socket 0 was expanded by 10MB 00:05:11.292 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.292 EAL: request: mp_malloc_sync 00:05:11.292 EAL: No shared files mode enabled, IPC is disabled 00:05:11.292 EAL: Heap on socket 0 was shrunk by 10MB 00:05:11.292 EAL: Trying to obtain current memory policy. 00:05:11.292 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.551 EAL: Restoring previous memory policy: 4 00:05:11.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.551 EAL: request: mp_malloc_sync 00:05:11.551 EAL: No shared files mode enabled, IPC is disabled 00:05:11.551 EAL: Heap on socket 0 was expanded by 18MB 00:05:11.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.551 EAL: request: mp_malloc_sync 00:05:11.551 EAL: No shared files mode enabled, IPC is disabled 00:05:11.551 EAL: Heap on socket 0 was shrunk by 18MB 00:05:11.551 EAL: Trying to obtain current memory policy. 00:05:11.551 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.551 EAL: Restoring previous memory policy: 4 00:05:11.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.551 EAL: request: mp_malloc_sync 00:05:11.551 EAL: No shared files mode enabled, IPC is disabled 00:05:11.551 EAL: Heap on socket 0 was expanded by 34MB 00:05:11.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.551 EAL: request: mp_malloc_sync 00:05:11.551 EAL: No shared files mode enabled, IPC is disabled 00:05:11.551 EAL: Heap on socket 0 was shrunk by 34MB 00:05:11.551 EAL: Trying to obtain current memory policy. 
00:05:11.551 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.551 EAL: Restoring previous memory policy: 4 00:05:11.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.551 EAL: request: mp_malloc_sync 00:05:11.551 EAL: No shared files mode enabled, IPC is disabled 00:05:11.551 EAL: Heap on socket 0 was expanded by 66MB 00:05:11.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.551 EAL: request: mp_malloc_sync 00:05:11.551 EAL: No shared files mode enabled, IPC is disabled 00:05:11.551 EAL: Heap on socket 0 was shrunk by 66MB 00:05:11.551 EAL: Trying to obtain current memory policy. 00:05:11.551 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.551 EAL: Restoring previous memory policy: 4 00:05:11.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.551 EAL: request: mp_malloc_sync 00:05:11.551 EAL: No shared files mode enabled, IPC is disabled 00:05:11.551 EAL: Heap on socket 0 was expanded by 130MB 00:05:11.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.551 EAL: request: mp_malloc_sync 00:05:11.551 EAL: No shared files mode enabled, IPC is disabled 00:05:11.551 EAL: Heap on socket 0 was shrunk by 130MB 00:05:11.551 EAL: Trying to obtain current memory policy. 00:05:11.551 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.551 EAL: Restoring previous memory policy: 4 00:05:11.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.551 EAL: request: mp_malloc_sync 00:05:11.551 EAL: No shared files mode enabled, IPC is disabled 00:05:11.551 EAL: Heap on socket 0 was expanded by 258MB 00:05:11.551 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.810 EAL: request: mp_malloc_sync 00:05:11.810 EAL: No shared files mode enabled, IPC is disabled 00:05:11.810 EAL: Heap on socket 0 was shrunk by 258MB 00:05:11.810 EAL: Trying to obtain current memory policy. 00:05:11.810 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.810 EAL: Restoring previous memory policy: 4 00:05:11.810 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.810 EAL: request: mp_malloc_sync 00:05:11.810 EAL: No shared files mode enabled, IPC is disabled 00:05:11.810 EAL: Heap on socket 0 was expanded by 514MB 00:05:11.810 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.069 EAL: request: mp_malloc_sync 00:05:12.069 EAL: No shared files mode enabled, IPC is disabled 00:05:12.069 EAL: Heap on socket 0 was shrunk by 514MB 00:05:12.069 EAL: Trying to obtain current memory policy. 
00:05:12.069 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:12.328 EAL: Restoring previous memory policy: 4 00:05:12.328 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.328 EAL: request: mp_malloc_sync 00:05:12.328 EAL: No shared files mode enabled, IPC is disabled 00:05:12.328 EAL: Heap on socket 0 was expanded by 1026MB 00:05:12.328 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.586 passed 00:05:12.586 00:05:12.586 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.586 suites 1 1 n/a 0 0 00:05:12.586 tests 2 2 2 0 0 00:05:12.586 asserts 5330 5330 5330 0 n/a 00:05:12.586 00:05:12.586 Elapsed time = 1.200 seconds 00:05:12.586 EAL: request: mp_malloc_sync 00:05:12.586 EAL: No shared files mode enabled, IPC is disabled 00:05:12.586 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:12.586 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.587 EAL: request: mp_malloc_sync 00:05:12.587 EAL: No shared files mode enabled, IPC is disabled 00:05:12.587 EAL: Heap on socket 0 was shrunk by 2MB 00:05:12.587 EAL: No shared files mode enabled, IPC is disabled 00:05:12.587 EAL: No shared files mode enabled, IPC is disabled 00:05:12.587 EAL: No shared files mode enabled, IPC is disabled 00:05:12.587 00:05:12.587 real 0m1.398s 00:05:12.587 user 0m0.759s 00:05:12.587 sys 0m0.504s 00:05:12.587 20:46:23 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.587 20:46:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:12.587 ************************************ 00:05:12.587 END TEST env_vtophys 00:05:12.587 ************************************ 00:05:12.587 20:46:23 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.587 20:46:23 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.587 20:46:23 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.587 20:46:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.587 ************************************ 00:05:12.587 START TEST env_pci 00:05:12.587 ************************************ 00:05:12.587 20:46:23 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.587 00:05:12.587 00:05:12.587 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.587 http://cunit.sourceforge.net/ 00:05:12.587 00:05:12.587 00:05:12.587 Suite: pci 00:05:12.587 Test: pci_hook ...[2024-08-11 20:46:23.361528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67491 has claimed it 00:05:12.844 passed 00:05:12.844 00:05:12.844 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.844 suites 1 1 n/a 0 0 00:05:12.844 tests 1 1 1 0 0 00:05:12.844 asserts 25 25 25 0 n/a 00:05:12.844 00:05:12.844 Elapsed time = 0.002 seconds 00:05:12.844 EAL: Cannot find device (10000:00:01.0) 00:05:12.844 EAL: Failed to attach device on primary process 00:05:12.844 00:05:12.844 real 0m0.017s 00:05:12.844 user 0m0.010s 00:05:12.844 sys 0m0.007s 00:05:12.844 20:46:23 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.844 20:46:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.844 ************************************ 00:05:12.844 END TEST env_pci 00:05:12.844 ************************************ 00:05:12.845 20:46:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.845 20:46:23 env -- env/env.sh@15 -- # uname 00:05:12.845 20:46:23 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.845 20:46:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.845 20:46:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.845 20:46:23 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:12.845 20:46:23 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.845 20:46:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.845 ************************************ 00:05:12.845 START TEST env_dpdk_post_init 00:05:12.845 ************************************ 00:05:12.845 20:46:23 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.845 EAL: Detected CPU lcores: 10 00:05:12.845 EAL: Detected NUMA nodes: 1 00:05:12.845 EAL: Detected shared linkage of DPDK 00:05:12.845 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.845 EAL: Selected IOVA mode 'PA' 00:05:12.845 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.845 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:12.845 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:12.845 Starting DPDK initialization... 00:05:12.845 Starting SPDK post initialization... 00:05:12.845 SPDK NVMe probe 00:05:12.845 Attaching to 0000:00:10.0 00:05:12.845 Attaching to 0000:00:11.0 00:05:12.845 Attached to 0000:00:10.0 00:05:12.845 Attached to 0000:00:11.0 00:05:12.845 Cleaning up... 00:05:12.845 00:05:12.845 real 0m0.162s 00:05:12.845 user 0m0.027s 00:05:12.845 sys 0m0.034s 00:05:12.845 20:46:23 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.845 20:46:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.845 ************************************ 00:05:12.845 END TEST env_dpdk_post_init 00:05:12.845 ************************************ 00:05:13.103 20:46:23 env -- env/env.sh@26 -- # uname 00:05:13.103 20:46:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:13.103 20:46:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.103 20:46:23 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.103 20:46:23 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.103 20:46:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.103 ************************************ 00:05:13.103 START TEST env_mem_callbacks 00:05:13.103 ************************************ 00:05:13.103 20:46:23 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.103 EAL: Detected CPU lcores: 10 00:05:13.103 EAL: Detected NUMA nodes: 1 00:05:13.103 EAL: Detected shared linkage of DPDK 00:05:13.103 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:13.103 EAL: Selected IOVA mode 'PA' 00:05:13.103 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.103 00:05:13.103 00:05:13.103 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.103 http://cunit.sourceforge.net/ 00:05:13.103 00:05:13.103 00:05:13.103 Suite: memory 00:05:13.103 Test: test ... 
00:05:13.103 register 0x200000200000 2097152 00:05:13.103 malloc 3145728 00:05:13.103 register 0x200000400000 4194304 00:05:13.103 buf 0x200000500000 len 3145728 PASSED 00:05:13.103 malloc 64 00:05:13.103 buf 0x2000004fff40 len 64 PASSED 00:05:13.103 malloc 4194304 00:05:13.103 register 0x200000800000 6291456 00:05:13.103 buf 0x200000a00000 len 4194304 PASSED 00:05:13.103 free 0x200000500000 3145728 00:05:13.103 free 0x2000004fff40 64 00:05:13.103 unregister 0x200000400000 4194304 PASSED 00:05:13.103 free 0x200000a00000 4194304 00:05:13.103 unregister 0x200000800000 6291456 PASSED 00:05:13.103 malloc 8388608 00:05:13.103 register 0x200000400000 10485760 00:05:13.103 buf 0x200000600000 len 8388608 PASSED 00:05:13.103 free 0x200000600000 8388608 00:05:13.103 unregister 0x200000400000 10485760 PASSED 00:05:13.103 passed 00:05:13.103 00:05:13.103 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.103 suites 1 1 n/a 0 0 00:05:13.103 tests 1 1 1 0 0 00:05:13.103 asserts 15 15 15 0 n/a 00:05:13.104 00:05:13.104 Elapsed time = 0.008 seconds 00:05:13.104 00:05:13.104 real 0m0.140s 00:05:13.104 user 0m0.020s 00:05:13.104 sys 0m0.019s 00:05:13.104 20:46:23 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.104 ************************************ 00:05:13.104 END TEST env_mem_callbacks 00:05:13.104 ************************************ 00:05:13.104 20:46:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:13.104 00:05:13.104 real 0m2.281s 00:05:13.104 user 0m1.136s 00:05:13.104 sys 0m0.790s 00:05:13.104 20:46:23 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.104 20:46:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.104 ************************************ 00:05:13.104 END TEST env 00:05:13.104 ************************************ 00:05:13.104 20:46:23 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.104 20:46:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.104 20:46:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.104 20:46:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.104 ************************************ 00:05:13.104 START TEST rpc 00:05:13.104 ************************************ 00:05:13.104 20:46:23 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.363 * Looking for test storage... 00:05:13.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.363 20:46:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=67606 00:05:13.363 20:46:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:13.363 20:46:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.363 20:46:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 67606 00:05:13.363 20:46:23 rpc -- common/autotest_common.sh@827 -- # '[' -z 67606 ']' 00:05:13.363 20:46:23 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.363 20:46:23 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:13.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.363 20:46:23 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:13.363 20:46:23 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:13.363 20:46:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.363 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:13.363 [2024-08-11 20:46:24.007501] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:13.363 [2024-08-11 20:46:24.007615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67606 ] 00:05:13.621 [2024-08-11 20:46:24.146650] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.621 [2024-08-11 20:46:24.219076] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:13.621 [2024-08-11 20:46:24.219141] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67606' to capture a snapshot of events at runtime. 00:05:13.621 [2024-08-11 20:46:24.219152] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.621 [2024-08-11 20:46:24.219159] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.621 [2024-08-11 20:46:24.219165] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67606 for offline analysis/debug. 00:05:13.621 [2024-08-11 20:46:24.219190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.621 [2024-08-11 20:46:24.271462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.881 20:46:24 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:13.881 20:46:24 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:13.881 20:46:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.881 20:46:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.881 20:46:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:13.881 20:46:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:13.881 20:46:24 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.881 20:46:24 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.881 20:46:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 ************************************ 00:05:13.881 START TEST rpc_integrity 00:05:13.881 ************************************ 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 
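The target for the rpc suite was started with '-e bdev', so the bdev tracepoint group is enabled from the start; the startup notice above already prints the reader command, and the trace_get_info output further down ("tpoint_group_mask": "0x8" with the bdev mask fully set) is the RPC view of that same state. A hedged sketch of capturing the trace while the target is still running is below; the build/bin location of spdk_trace is assumed from the layout of the other binaries in this log, and pid 67606 is specific to this run.

    # attach the trace reader to the running target's shared-memory trace (pid from the notice above)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 67606
    # or, as the notice suggests, copy /dev/shm/spdk_tgt_trace.pid67606 for offline analysis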
']' 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.881 { 00:05:13.881 "name": "Malloc0", 00:05:13.881 "aliases": [ 00:05:13.881 "3ff0def7-e552-4a2c-9c26-1831bd01dec5" 00:05:13.881 ], 00:05:13.881 "product_name": "Malloc disk", 00:05:13.881 "block_size": 512, 00:05:13.881 "num_blocks": 16384, 00:05:13.881 "uuid": "3ff0def7-e552-4a2c-9c26-1831bd01dec5", 00:05:13.881 "assigned_rate_limits": { 00:05:13.881 "rw_ios_per_sec": 0, 00:05:13.881 "rw_mbytes_per_sec": 0, 00:05:13.881 "r_mbytes_per_sec": 0, 00:05:13.881 "w_mbytes_per_sec": 0 00:05:13.881 }, 00:05:13.881 "claimed": false, 00:05:13.881 "zoned": false, 00:05:13.881 "supported_io_types": { 00:05:13.881 "read": true, 00:05:13.881 "write": true, 00:05:13.881 "unmap": true, 00:05:13.881 "flush": true, 00:05:13.881 "reset": true, 00:05:13.881 "nvme_admin": false, 00:05:13.881 "nvme_io": false, 00:05:13.881 "nvme_io_md": false, 00:05:13.881 "write_zeroes": true, 00:05:13.881 "zcopy": true, 00:05:13.881 "get_zone_info": false, 00:05:13.881 "zone_management": false, 00:05:13.881 "zone_append": false, 00:05:13.881 "compare": false, 00:05:13.881 "compare_and_write": false, 00:05:13.881 "abort": true, 00:05:13.881 "seek_hole": false, 00:05:13.881 "seek_data": false, 00:05:13.881 "copy": true, 00:05:13.881 "nvme_iov_md": false 00:05:13.881 }, 00:05:13.881 "memory_domains": [ 00:05:13.881 { 00:05:13.881 "dma_device_id": "system", 00:05:13.881 "dma_device_type": 1 00:05:13.881 }, 00:05:13.881 { 00:05:13.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.881 "dma_device_type": 2 00:05:13.881 } 00:05:13.881 ], 00:05:13.881 "driver_specific": {} 00:05:13.881 } 00:05:13.881 ]' 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 [2024-08-11 20:46:24.615277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:13.881 [2024-08-11 20:46:24.615328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.881 [2024-08-11 20:46:24.615342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x231d170 00:05:13.881 [2024-08-11 20:46:24.615351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.881 [2024-08-11 20:46:24.616862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.881 [2024-08-11 20:46:24.616892] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.881 Passthru0 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.881 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:13.881 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.881 { 00:05:13.882 "name": "Malloc0", 00:05:13.882 "aliases": [ 00:05:13.882 "3ff0def7-e552-4a2c-9c26-1831bd01dec5" 00:05:13.882 ], 00:05:13.882 "product_name": "Malloc disk", 00:05:13.882 "block_size": 512, 00:05:13.882 "num_blocks": 16384, 00:05:13.882 "uuid": "3ff0def7-e552-4a2c-9c26-1831bd01dec5", 00:05:13.882 "assigned_rate_limits": { 00:05:13.882 "rw_ios_per_sec": 0, 00:05:13.882 "rw_mbytes_per_sec": 0, 00:05:13.882 "r_mbytes_per_sec": 0, 00:05:13.882 "w_mbytes_per_sec": 0 00:05:13.882 }, 00:05:13.882 "claimed": true, 00:05:13.882 "claim_type": "exclusive_write", 00:05:13.882 "zoned": false, 00:05:13.882 "supported_io_types": { 00:05:13.882 "read": true, 00:05:13.882 "write": true, 00:05:13.882 "unmap": true, 00:05:13.882 "flush": true, 00:05:13.882 "reset": true, 00:05:13.882 "nvme_admin": false, 00:05:13.882 "nvme_io": false, 00:05:13.882 "nvme_io_md": false, 00:05:13.882 "write_zeroes": true, 00:05:13.882 "zcopy": true, 00:05:13.882 "get_zone_info": false, 00:05:13.882 "zone_management": false, 00:05:13.882 "zone_append": false, 00:05:13.882 "compare": false, 00:05:13.882 "compare_and_write": false, 00:05:13.882 "abort": true, 00:05:13.882 "seek_hole": false, 00:05:13.882 "seek_data": false, 00:05:13.882 "copy": true, 00:05:13.882 "nvme_iov_md": false 00:05:13.882 }, 00:05:13.882 "memory_domains": [ 00:05:13.882 { 00:05:13.882 "dma_device_id": "system", 00:05:13.882 "dma_device_type": 1 00:05:13.882 }, 00:05:13.882 { 00:05:13.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.882 "dma_device_type": 2 00:05:13.882 } 00:05:13.882 ], 00:05:13.882 "driver_specific": {} 00:05:13.882 }, 00:05:13.882 { 00:05:13.882 "name": "Passthru0", 00:05:13.882 "aliases": [ 00:05:13.882 "806688bf-efdc-53f8-9a6e-b73e7e00aeb6" 00:05:13.882 ], 00:05:13.882 "product_name": "passthru", 00:05:13.882 "block_size": 512, 00:05:13.882 "num_blocks": 16384, 00:05:13.882 "uuid": "806688bf-efdc-53f8-9a6e-b73e7e00aeb6", 00:05:13.882 "assigned_rate_limits": { 00:05:13.882 "rw_ios_per_sec": 0, 00:05:13.882 "rw_mbytes_per_sec": 0, 00:05:13.882 "r_mbytes_per_sec": 0, 00:05:13.882 "w_mbytes_per_sec": 0 00:05:13.882 }, 00:05:13.882 "claimed": false, 00:05:13.882 "zoned": false, 00:05:13.882 "supported_io_types": { 00:05:13.882 "read": true, 00:05:13.882 "write": true, 00:05:13.882 "unmap": true, 00:05:13.882 "flush": true, 00:05:13.882 "reset": true, 00:05:13.882 "nvme_admin": false, 00:05:13.882 "nvme_io": false, 00:05:13.882 "nvme_io_md": false, 00:05:13.882 "write_zeroes": true, 00:05:13.882 "zcopy": true, 00:05:13.882 "get_zone_info": false, 00:05:13.882 "zone_management": false, 00:05:13.882 "zone_append": false, 00:05:13.882 "compare": false, 00:05:13.882 "compare_and_write": false, 00:05:13.882 "abort": true, 00:05:13.882 "seek_hole": false, 00:05:13.882 "seek_data": false, 00:05:13.882 "copy": true, 00:05:13.882 "nvme_iov_md": false 00:05:13.882 }, 00:05:13.882 "memory_domains": [ 
00:05:13.882 { 00:05:13.882 "dma_device_id": "system", 00:05:13.882 "dma_device_type": 1 00:05:13.882 }, 00:05:13.882 { 00:05:13.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.882 "dma_device_type": 2 00:05:13.882 } 00:05:13.882 ], 00:05:13.882 "driver_specific": { 00:05:13.882 "passthru": { 00:05:13.882 "name": "Passthru0", 00:05:13.882 "base_bdev_name": "Malloc0" 00:05:13.882 } 00:05:13.882 } 00:05:13.882 } 00:05:13.882 ]' 00:05:13.882 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.141 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.141 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.141 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.141 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.141 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.141 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:14.141 20:46:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.141 00:05:14.141 real 0m0.331s 00:05:14.141 user 0m0.227s 00:05:14.141 sys 0m0.035s 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.141 20:46:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.141 ************************************ 00:05:14.141 END TEST rpc_integrity 00:05:14.141 ************************************ 00:05:14.141 20:46:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:14.141 20:46:24 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.141 20:46:24 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.141 20:46:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.141 ************************************ 00:05:14.141 START TEST rpc_plugins 00:05:14.141 ************************************ 00:05:14.141 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:14.141 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:14.141 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.141 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.141 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.141 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:14.141 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:14.141 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.141 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.141 
20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.141 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:14.141 { 00:05:14.141 "name": "Malloc1", 00:05:14.141 "aliases": [ 00:05:14.141 "103cb3c5-a0e2-40e9-b941-6f689a00cbf4" 00:05:14.141 ], 00:05:14.141 "product_name": "Malloc disk", 00:05:14.141 "block_size": 4096, 00:05:14.141 "num_blocks": 256, 00:05:14.141 "uuid": "103cb3c5-a0e2-40e9-b941-6f689a00cbf4", 00:05:14.141 "assigned_rate_limits": { 00:05:14.141 "rw_ios_per_sec": 0, 00:05:14.141 "rw_mbytes_per_sec": 0, 00:05:14.141 "r_mbytes_per_sec": 0, 00:05:14.141 "w_mbytes_per_sec": 0 00:05:14.141 }, 00:05:14.141 "claimed": false, 00:05:14.141 "zoned": false, 00:05:14.141 "supported_io_types": { 00:05:14.141 "read": true, 00:05:14.141 "write": true, 00:05:14.141 "unmap": true, 00:05:14.141 "flush": true, 00:05:14.141 "reset": true, 00:05:14.141 "nvme_admin": false, 00:05:14.141 "nvme_io": false, 00:05:14.141 "nvme_io_md": false, 00:05:14.141 "write_zeroes": true, 00:05:14.141 "zcopy": true, 00:05:14.141 "get_zone_info": false, 00:05:14.141 "zone_management": false, 00:05:14.141 "zone_append": false, 00:05:14.141 "compare": false, 00:05:14.141 "compare_and_write": false, 00:05:14.141 "abort": true, 00:05:14.141 "seek_hole": false, 00:05:14.141 "seek_data": false, 00:05:14.141 "copy": true, 00:05:14.141 "nvme_iov_md": false 00:05:14.141 }, 00:05:14.141 "memory_domains": [ 00:05:14.141 { 00:05:14.141 "dma_device_id": "system", 00:05:14.141 "dma_device_type": 1 00:05:14.141 }, 00:05:14.141 { 00:05:14.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.141 "dma_device_type": 2 00:05:14.141 } 00:05:14.141 ], 00:05:14.141 "driver_specific": {} 00:05:14.141 } 00:05:14.141 ]' 00:05:14.141 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:14.401 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:14.401 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:14.401 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.401 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.401 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.401 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:14.401 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.401 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.401 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.401 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:14.401 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:14.401 20:46:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:14.401 00:05:14.401 real 0m0.159s 00:05:14.401 user 0m0.107s 00:05:14.401 sys 0m0.016s 00:05:14.401 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.401 20:46:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.401 ************************************ 00:05:14.401 END TEST rpc_plugins 00:05:14.401 ************************************ 00:05:14.401 20:46:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:14.401 20:46:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.401 20:46:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.401 20:46:25 rpc -- common/autotest_common.sh@10 -- # set +x 
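rpc_integrity and rpc_plugins above drive the same spdk_tgt instance over its JSON-RPC socket through the rpc_cmd wrapper: create a malloc bdev, layer a passthru bdev on top of it, list the bdevs, then tear both down. A sketch of the same round trip issued with the stock scripts/rpc.py client instead of the test wrapper is shown below; the paths match this checkout, while the background job handling and the fixed sleep are simplifying assumptions (the harness waits on /var/tmp/spdk.sock instead).

    # run as root with hugepages configured, as the harness above does
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &                 # same invocation as rpc.sh@64 above
    sleep 2                                                                   # assumed crude wait for /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512      # prints the new bdev name, e.g. Malloc0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq length    # 2, matching the '[' 2 == 2 ']' check
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_delete Passthru0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0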
00:05:14.401 ************************************ 00:05:14.401 START TEST rpc_trace_cmd_test 00:05:14.401 ************************************ 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:14.401 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67606", 00:05:14.401 "tpoint_group_mask": "0x8", 00:05:14.401 "iscsi_conn": { 00:05:14.401 "mask": "0x2", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "scsi": { 00:05:14.401 "mask": "0x4", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "bdev": { 00:05:14.401 "mask": "0x8", 00:05:14.401 "tpoint_mask": "0xffffffffffffffff" 00:05:14.401 }, 00:05:14.401 "nvmf_rdma": { 00:05:14.401 "mask": "0x10", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "nvmf_tcp": { 00:05:14.401 "mask": "0x20", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "ftl": { 00:05:14.401 "mask": "0x40", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "blobfs": { 00:05:14.401 "mask": "0x80", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "dsa": { 00:05:14.401 "mask": "0x200", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "thread": { 00:05:14.401 "mask": "0x400", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "nvme_pcie": { 00:05:14.401 "mask": "0x800", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "iaa": { 00:05:14.401 "mask": "0x1000", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "nvme_tcp": { 00:05:14.401 "mask": "0x2000", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "bdev_nvme": { 00:05:14.401 "mask": "0x4000", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 }, 00:05:14.401 "sock": { 00:05:14.401 "mask": "0x8000", 00:05:14.401 "tpoint_mask": "0x0" 00:05:14.401 } 00:05:14.401 }' 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:14.401 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:14.660 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:14.660 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:14.660 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:14.660 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:14.660 20:46:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:14.660 00:05:14.660 real 0m0.277s 00:05:14.660 user 0m0.236s 00:05:14.660 sys 0m0.032s 00:05:14.660 20:46:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.660 20:46:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 
00:05:14.660 ************************************ 00:05:14.660 END TEST rpc_trace_cmd_test 00:05:14.660 ************************************ 00:05:14.660 20:46:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:14.660 20:46:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:14.660 20:46:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:14.660 20:46:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.660 20:46:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.660 20:46:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.660 ************************************ 00:05:14.660 START TEST rpc_daemon_integrity 00:05:14.660 ************************************ 00:05:14.660 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:14.660 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.660 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.660 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.660 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.660 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.660 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.919 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.919 { 00:05:14.919 "name": "Malloc2", 00:05:14.919 "aliases": [ 00:05:14.919 "9510407e-4947-4d06-bb88-d5f34bfb3b5f" 00:05:14.919 ], 00:05:14.919 "product_name": "Malloc disk", 00:05:14.919 "block_size": 512, 00:05:14.919 "num_blocks": 16384, 00:05:14.919 "uuid": "9510407e-4947-4d06-bb88-d5f34bfb3b5f", 00:05:14.919 "assigned_rate_limits": { 00:05:14.919 "rw_ios_per_sec": 0, 00:05:14.919 "rw_mbytes_per_sec": 0, 00:05:14.919 "r_mbytes_per_sec": 0, 00:05:14.919 "w_mbytes_per_sec": 0 00:05:14.919 }, 00:05:14.919 "claimed": false, 00:05:14.919 "zoned": false, 00:05:14.919 "supported_io_types": { 00:05:14.919 "read": true, 00:05:14.919 "write": true, 00:05:14.919 "unmap": true, 00:05:14.919 "flush": true, 00:05:14.919 "reset": true, 00:05:14.919 "nvme_admin": false, 00:05:14.919 "nvme_io": false, 00:05:14.919 "nvme_io_md": false, 00:05:14.919 "write_zeroes": true, 00:05:14.919 "zcopy": true, 00:05:14.919 "get_zone_info": false, 00:05:14.919 "zone_management": false, 00:05:14.919 "zone_append": false, 00:05:14.919 "compare": false, 00:05:14.919 "compare_and_write": false, 00:05:14.919 "abort": true, 00:05:14.919 "seek_hole": false, 00:05:14.919 
"seek_data": false, 00:05:14.919 "copy": true, 00:05:14.919 "nvme_iov_md": false 00:05:14.919 }, 00:05:14.919 "memory_domains": [ 00:05:14.919 { 00:05:14.919 "dma_device_id": "system", 00:05:14.919 "dma_device_type": 1 00:05:14.919 }, 00:05:14.919 { 00:05:14.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.919 "dma_device_type": 2 00:05:14.920 } 00:05:14.920 ], 00:05:14.920 "driver_specific": {} 00:05:14.920 } 00:05:14.920 ]' 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.920 [2024-08-11 20:46:25.527577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:14.920 [2024-08-11 20:46:25.527638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.920 [2024-08-11 20:46:25.527653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x230e9c0 00:05:14.920 [2024-08-11 20:46:25.527661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.920 [2024-08-11 20:46:25.528871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.920 [2024-08-11 20:46:25.528901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.920 Passthru0 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.920 { 00:05:14.920 "name": "Malloc2", 00:05:14.920 "aliases": [ 00:05:14.920 "9510407e-4947-4d06-bb88-d5f34bfb3b5f" 00:05:14.920 ], 00:05:14.920 "product_name": "Malloc disk", 00:05:14.920 "block_size": 512, 00:05:14.920 "num_blocks": 16384, 00:05:14.920 "uuid": "9510407e-4947-4d06-bb88-d5f34bfb3b5f", 00:05:14.920 "assigned_rate_limits": { 00:05:14.920 "rw_ios_per_sec": 0, 00:05:14.920 "rw_mbytes_per_sec": 0, 00:05:14.920 "r_mbytes_per_sec": 0, 00:05:14.920 "w_mbytes_per_sec": 0 00:05:14.920 }, 00:05:14.920 "claimed": true, 00:05:14.920 "claim_type": "exclusive_write", 00:05:14.920 "zoned": false, 00:05:14.920 "supported_io_types": { 00:05:14.920 "read": true, 00:05:14.920 "write": true, 00:05:14.920 "unmap": true, 00:05:14.920 "flush": true, 00:05:14.920 "reset": true, 00:05:14.920 "nvme_admin": false, 00:05:14.920 "nvme_io": false, 00:05:14.920 "nvme_io_md": false, 00:05:14.920 "write_zeroes": true, 00:05:14.920 "zcopy": true, 00:05:14.920 "get_zone_info": false, 00:05:14.920 "zone_management": false, 00:05:14.920 "zone_append": false, 00:05:14.920 "compare": false, 00:05:14.920 "compare_and_write": false, 00:05:14.920 "abort": true, 00:05:14.920 "seek_hole": false, 00:05:14.920 "seek_data": false, 00:05:14.920 "copy": true, 00:05:14.920 "nvme_iov_md": false 00:05:14.920 }, 00:05:14.920 "memory_domains": 
[ 00:05:14.920 { 00:05:14.920 "dma_device_id": "system", 00:05:14.920 "dma_device_type": 1 00:05:14.920 }, 00:05:14.920 { 00:05:14.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.920 "dma_device_type": 2 00:05:14.920 } 00:05:14.920 ], 00:05:14.920 "driver_specific": {} 00:05:14.920 }, 00:05:14.920 { 00:05:14.920 "name": "Passthru0", 00:05:14.920 "aliases": [ 00:05:14.920 "5c596228-53c3-5d91-9183-f1a337280a57" 00:05:14.920 ], 00:05:14.920 "product_name": "passthru", 00:05:14.920 "block_size": 512, 00:05:14.920 "num_blocks": 16384, 00:05:14.920 "uuid": "5c596228-53c3-5d91-9183-f1a337280a57", 00:05:14.920 "assigned_rate_limits": { 00:05:14.920 "rw_ios_per_sec": 0, 00:05:14.920 "rw_mbytes_per_sec": 0, 00:05:14.920 "r_mbytes_per_sec": 0, 00:05:14.920 "w_mbytes_per_sec": 0 00:05:14.920 }, 00:05:14.920 "claimed": false, 00:05:14.920 "zoned": false, 00:05:14.920 "supported_io_types": { 00:05:14.920 "read": true, 00:05:14.920 "write": true, 00:05:14.920 "unmap": true, 00:05:14.920 "flush": true, 00:05:14.920 "reset": true, 00:05:14.920 "nvme_admin": false, 00:05:14.920 "nvme_io": false, 00:05:14.920 "nvme_io_md": false, 00:05:14.920 "write_zeroes": true, 00:05:14.920 "zcopy": true, 00:05:14.920 "get_zone_info": false, 00:05:14.920 "zone_management": false, 00:05:14.920 "zone_append": false, 00:05:14.920 "compare": false, 00:05:14.920 "compare_and_write": false, 00:05:14.920 "abort": true, 00:05:14.920 "seek_hole": false, 00:05:14.920 "seek_data": false, 00:05:14.920 "copy": true, 00:05:14.920 "nvme_iov_md": false 00:05:14.920 }, 00:05:14.920 "memory_domains": [ 00:05:14.920 { 00:05:14.920 "dma_device_id": "system", 00:05:14.920 "dma_device_type": 1 00:05:14.920 }, 00:05:14.920 { 00:05:14.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.920 "dma_device_type": 2 00:05:14.920 } 00:05:14.920 ], 00:05:14.920 "driver_specific": { 00:05:14.920 "passthru": { 00:05:14.920 "name": "Passthru0", 00:05:14.920 "base_bdev_name": "Malloc2" 00:05:14.920 } 00:05:14.920 } 00:05:14.920 } 00:05:14.920 ]' 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.920 20:46:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.179 20:46:25 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.179 00:05:15.179 real 0m0.326s 00:05:15.179 user 0m0.219s 00:05:15.179 sys 0m0.039s 00:05:15.179 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.179 ************************************ 00:05:15.179 END TEST rpc_daemon_integrity 00:05:15.179 20:46:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.179 ************************************ 00:05:15.179 20:46:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:15.179 20:46:25 rpc -- rpc/rpc.sh@84 -- # killprocess 67606 00:05:15.179 20:46:25 rpc -- common/autotest_common.sh@946 -- # '[' -z 67606 ']' 00:05:15.179 20:46:25 rpc -- common/autotest_common.sh@950 -- # kill -0 67606 00:05:15.179 20:46:25 rpc -- common/autotest_common.sh@951 -- # uname 00:05:15.179 20:46:25 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:15.179 20:46:25 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67606 00:05:15.179 20:46:25 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:15.179 20:46:25 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:15.179 killing process with pid 67606 00:05:15.179 20:46:25 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67606' 00:05:15.179 20:46:25 rpc -- common/autotest_common.sh@965 -- # kill 67606 00:05:15.179 20:46:25 rpc -- common/autotest_common.sh@970 -- # wait 67606 00:05:15.438 00:05:15.438 real 0m2.255s 00:05:15.438 user 0m2.964s 00:05:15.438 sys 0m0.606s 00:05:15.438 20:46:26 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.438 20:46:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.438 ************************************ 00:05:15.438 END TEST rpc 00:05:15.438 ************************************ 00:05:15.438 20:46:26 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:15.438 20:46:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.438 20:46:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.438 20:46:26 -- common/autotest_common.sh@10 -- # set +x 00:05:15.438 ************************************ 00:05:15.438 START TEST skip_rpc 00:05:15.438 ************************************ 00:05:15.438 20:46:26 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:15.697 * Looking for test storage... 
00:05:15.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:15.697 20:46:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.697 20:46:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:15.697 20:46:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:15.697 20:46:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.697 20:46:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.697 20:46:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.697 ************************************ 00:05:15.697 START TEST skip_rpc 00:05:15.697 ************************************ 00:05:15.697 20:46:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:15.697 20:46:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=67791 00:05:15.697 20:46:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:15.697 20:46:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.697 20:46:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:15.697 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:15.697 [2024-08-11 20:46:26.330552] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:15.697 [2024-08-11 20:46:26.330699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67791 ] 00:05:15.697 [2024-08-11 20:46:26.460190] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.956 [2024-08-11 20:46:26.518352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.956 [2024-08-11 20:46:26.569344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@646 -- # local es=0 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # rpc_cmd spdk_get_version 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # es=1 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- 
common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 67791 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 67791 ']' 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 67791 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67791 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:21.228 killing process with pid 67791 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67791' 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 67791 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 67791 00:05:21.228 00:05:21.228 real 0m5.395s 00:05:21.228 user 0m5.033s 00:05:21.228 sys 0m0.285s 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.228 20:46:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.228 ************************************ 00:05:21.228 END TEST skip_rpc 00:05:21.228 ************************************ 00:05:21.228 20:46:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:21.228 20:46:31 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.228 20:46:31 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.228 20:46:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.228 ************************************ 00:05:21.228 START TEST skip_rpc_with_json 00:05:21.228 ************************************ 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=67878 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 67878 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 67878 ']' 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
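skip_rpc_with_json adds a configuration round trip to the pattern: start the target with the RPC server enabled, create the TCP transport, save the live configuration, then restart with --no-rpc-server and feed the saved JSON back in. The config dump that follows in this log is exactly what save_config produced. A condensed sketch of that cycle is below; the job control and fixed waits are simplifying assumptions, while the binaries, flags, and config path are the ones shown in this trace.

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    sleep 2    # assumed wait for /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
    kill %1 && wait
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json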
00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.228 20:46:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.228 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:21.228 [2024-08-11 20:46:31.770311] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:21.228 [2024-08-11 20:46:31.770437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67878 ] 00:05:21.228 [2024-08-11 20:46:31.901683] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.228 [2024-08-11 20:46:31.963941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.487 [2024-08-11 20:46:32.014394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.055 [2024-08-11 20:46:32.698949] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:22.055 request: 00:05:22.055 { 00:05:22.055 "trtype": "tcp", 00:05:22.055 "method": "nvmf_get_transports", 00:05:22.055 "req_id": 1 00:05:22.055 } 00:05:22.055 Got JSON-RPC error response 00:05:22.055 response: 00:05:22.055 { 00:05:22.055 "code": -19, 00:05:22.055 "message": "No such device" 00:05:22.055 } 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.055 [2024-08-11 20:46:32.711097] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:22.055 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.315 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:22.315 20:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.315 { 00:05:22.315 "subsystems": [ 00:05:22.315 { 00:05:22.315 "subsystem": "keyring", 00:05:22.315 "config": [] 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "subsystem": "iobuf", 00:05:22.315 "config": [ 00:05:22.315 { 00:05:22.315 "method": "iobuf_set_options", 00:05:22.315 "params": { 00:05:22.315 "small_pool_count": 8192, 00:05:22.315 "large_pool_count": 1024, 00:05:22.315 "small_bufsize": 
8192, 00:05:22.315 "large_bufsize": 135168 00:05:22.315 } 00:05:22.315 } 00:05:22.315 ] 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "subsystem": "sock", 00:05:22.315 "config": [ 00:05:22.315 { 00:05:22.315 "method": "sock_set_default_impl", 00:05:22.315 "params": { 00:05:22.315 "impl_name": "uring" 00:05:22.315 } 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "method": "sock_impl_set_options", 00:05:22.315 "params": { 00:05:22.315 "impl_name": "ssl", 00:05:22.315 "recv_buf_size": 4096, 00:05:22.315 "send_buf_size": 4096, 00:05:22.315 "enable_recv_pipe": true, 00:05:22.315 "enable_quickack": false, 00:05:22.315 "enable_placement_id": 0, 00:05:22.315 "enable_zerocopy_send_server": true, 00:05:22.315 "enable_zerocopy_send_client": false, 00:05:22.315 "zerocopy_threshold": 0, 00:05:22.315 "tls_version": 0, 00:05:22.315 "enable_ktls": false 00:05:22.315 } 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "method": "sock_impl_set_options", 00:05:22.315 "params": { 00:05:22.315 "impl_name": "posix", 00:05:22.315 "recv_buf_size": 2097152, 00:05:22.315 "send_buf_size": 2097152, 00:05:22.315 "enable_recv_pipe": true, 00:05:22.315 "enable_quickack": false, 00:05:22.315 "enable_placement_id": 0, 00:05:22.315 "enable_zerocopy_send_server": true, 00:05:22.315 "enable_zerocopy_send_client": false, 00:05:22.315 "zerocopy_threshold": 0, 00:05:22.315 "tls_version": 0, 00:05:22.315 "enable_ktls": false 00:05:22.315 } 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "method": "sock_impl_set_options", 00:05:22.315 "params": { 00:05:22.315 "impl_name": "uring", 00:05:22.315 "recv_buf_size": 2097152, 00:05:22.315 "send_buf_size": 2097152, 00:05:22.315 "enable_recv_pipe": true, 00:05:22.315 "enable_quickack": false, 00:05:22.315 "enable_placement_id": 0, 00:05:22.315 "enable_zerocopy_send_server": false, 00:05:22.315 "enable_zerocopy_send_client": false, 00:05:22.315 "zerocopy_threshold": 0, 00:05:22.315 "tls_version": 0, 00:05:22.315 "enable_ktls": false 00:05:22.315 } 00:05:22.315 } 00:05:22.315 ] 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "subsystem": "vmd", 00:05:22.315 "config": [] 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "subsystem": "accel", 00:05:22.315 "config": [ 00:05:22.315 { 00:05:22.315 "method": "accel_set_options", 00:05:22.315 "params": { 00:05:22.315 "small_cache_size": 128, 00:05:22.315 "large_cache_size": 16, 00:05:22.315 "task_count": 2048, 00:05:22.315 "sequence_count": 2048, 00:05:22.315 "buf_count": 2048 00:05:22.315 } 00:05:22.315 } 00:05:22.315 ] 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "subsystem": "bdev", 00:05:22.315 "config": [ 00:05:22.315 { 00:05:22.315 "method": "bdev_set_options", 00:05:22.315 "params": { 00:05:22.315 "bdev_io_pool_size": 65535, 00:05:22.315 "bdev_io_cache_size": 256, 00:05:22.315 "bdev_auto_examine": true, 00:05:22.315 "iobuf_small_cache_size": 128, 00:05:22.315 "iobuf_large_cache_size": 16 00:05:22.315 } 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "method": "bdev_raid_set_options", 00:05:22.315 "params": { 00:05:22.315 "process_window_size_kb": 1024, 00:05:22.315 "process_max_bandwidth_mb_sec": 0 00:05:22.315 } 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "method": "bdev_iscsi_set_options", 00:05:22.315 "params": { 00:05:22.315 "timeout_sec": 30 00:05:22.315 } 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "method": "bdev_nvme_set_options", 00:05:22.315 "params": { 00:05:22.315 "action_on_timeout": "none", 00:05:22.315 "timeout_us": 0, 00:05:22.315 "timeout_admin_us": 0, 00:05:22.315 "keep_alive_timeout_ms": 10000, 00:05:22.315 "arbitration_burst": 0, 00:05:22.315 
"low_priority_weight": 0, 00:05:22.315 "medium_priority_weight": 0, 00:05:22.315 "high_priority_weight": 0, 00:05:22.315 "nvme_adminq_poll_period_us": 10000, 00:05:22.315 "nvme_ioq_poll_period_us": 0, 00:05:22.315 "io_queue_requests": 0, 00:05:22.315 "delay_cmd_submit": true, 00:05:22.315 "transport_retry_count": 4, 00:05:22.315 "bdev_retry_count": 3, 00:05:22.315 "transport_ack_timeout": 0, 00:05:22.315 "ctrlr_loss_timeout_sec": 0, 00:05:22.315 "reconnect_delay_sec": 0, 00:05:22.315 "fast_io_fail_timeout_sec": 0, 00:05:22.315 "disable_auto_failback": false, 00:05:22.315 "generate_uuids": false, 00:05:22.315 "transport_tos": 0, 00:05:22.315 "nvme_error_stat": false, 00:05:22.315 "rdma_srq_size": 0, 00:05:22.315 "io_path_stat": false, 00:05:22.315 "allow_accel_sequence": false, 00:05:22.315 "rdma_max_cq_size": 0, 00:05:22.315 "rdma_cm_event_timeout_ms": 0, 00:05:22.315 "dhchap_digests": [ 00:05:22.315 "sha256", 00:05:22.315 "sha384", 00:05:22.315 "sha512" 00:05:22.315 ], 00:05:22.315 "dhchap_dhgroups": [ 00:05:22.315 "null", 00:05:22.315 "ffdhe2048", 00:05:22.315 "ffdhe3072", 00:05:22.315 "ffdhe4096", 00:05:22.315 "ffdhe6144", 00:05:22.315 "ffdhe8192" 00:05:22.315 ] 00:05:22.315 } 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "method": "bdev_nvme_set_hotplug", 00:05:22.315 "params": { 00:05:22.315 "period_us": 100000, 00:05:22.315 "enable": false 00:05:22.315 } 00:05:22.315 }, 00:05:22.315 { 00:05:22.315 "method": "bdev_wait_for_examine" 00:05:22.315 } 00:05:22.315 ] 00:05:22.315 }, 00:05:22.316 { 00:05:22.316 "subsystem": "scsi", 00:05:22.316 "config": null 00:05:22.316 }, 00:05:22.316 { 00:05:22.316 "subsystem": "scheduler", 00:05:22.316 "config": [ 00:05:22.316 { 00:05:22.316 "method": "framework_set_scheduler", 00:05:22.316 "params": { 00:05:22.316 "name": "static" 00:05:22.316 } 00:05:22.316 } 00:05:22.316 ] 00:05:22.316 }, 00:05:22.316 { 00:05:22.316 "subsystem": "vhost_scsi", 00:05:22.316 "config": [] 00:05:22.316 }, 00:05:22.316 { 00:05:22.316 "subsystem": "vhost_blk", 00:05:22.316 "config": [] 00:05:22.316 }, 00:05:22.316 { 00:05:22.316 "subsystem": "ublk", 00:05:22.316 "config": [] 00:05:22.316 }, 00:05:22.316 { 00:05:22.316 "subsystem": "nbd", 00:05:22.316 "config": [] 00:05:22.316 }, 00:05:22.316 { 00:05:22.316 "subsystem": "nvmf", 00:05:22.316 "config": [ 00:05:22.316 { 00:05:22.316 "method": "nvmf_set_config", 00:05:22.316 "params": { 00:05:22.316 "discovery_filter": "match_any", 00:05:22.316 "admin_cmd_passthru": { 00:05:22.316 "identify_ctrlr": false 00:05:22.316 } 00:05:22.316 } 00:05:22.316 }, 00:05:22.316 { 00:05:22.316 "method": "nvmf_set_max_subsystems", 00:05:22.316 "params": { 00:05:22.316 "max_subsystems": 1024 00:05:22.316 } 00:05:22.316 }, 00:05:22.316 { 00:05:22.316 "method": "nvmf_set_crdt", 00:05:22.316 "params": { 00:05:22.316 "crdt1": 0, 00:05:22.316 "crdt2": 0, 00:05:22.316 "crdt3": 0 00:05:22.316 } 00:05:22.316 }, 00:05:22.316 { 00:05:22.316 "method": "nvmf_create_transport", 00:05:22.316 "params": { 00:05:22.316 "trtype": "TCP", 00:05:22.316 "max_queue_depth": 128, 00:05:22.316 "max_io_qpairs_per_ctrlr": 127, 00:05:22.316 "in_capsule_data_size": 4096, 00:05:22.316 "max_io_size": 131072, 00:05:22.316 "io_unit_size": 131072, 00:05:22.316 "max_aq_depth": 128, 00:05:22.316 "num_shared_buffers": 511, 00:05:22.316 "buf_cache_size": 4294967295, 00:05:22.316 "dif_insert_or_strip": false, 00:05:22.316 "zcopy": false, 00:05:22.316 "c2h_success": true, 00:05:22.316 "sock_priority": 0, 00:05:22.316 "abort_timeout_sec": 1, 00:05:22.316 "ack_timeout": 0, 00:05:22.316 
"data_wr_pool_size": 0 00:05:22.316 } 00:05:22.316 } 00:05:22.316 ] 00:05:22.316 }, 00:05:22.316 { 00:05:22.316 "subsystem": "iscsi", 00:05:22.316 "config": [ 00:05:22.316 { 00:05:22.316 "method": "iscsi_set_options", 00:05:22.316 "params": { 00:05:22.316 "node_base": "iqn.2016-06.io.spdk", 00:05:22.316 "max_sessions": 128, 00:05:22.316 "max_connections_per_session": 2, 00:05:22.316 "max_queue_depth": 64, 00:05:22.316 "default_time2wait": 2, 00:05:22.316 "default_time2retain": 20, 00:05:22.316 "first_burst_length": 8192, 00:05:22.316 "immediate_data": true, 00:05:22.316 "allow_duplicated_isid": false, 00:05:22.316 "error_recovery_level": 0, 00:05:22.316 "nop_timeout": 60, 00:05:22.316 "nop_in_interval": 30, 00:05:22.316 "disable_chap": false, 00:05:22.316 "require_chap": false, 00:05:22.316 "mutual_chap": false, 00:05:22.316 "chap_group": 0, 00:05:22.316 "max_large_datain_per_connection": 64, 00:05:22.316 "max_r2t_per_connection": 4, 00:05:22.316 "pdu_pool_size": 36864, 00:05:22.316 "immediate_data_pool_size": 16384, 00:05:22.316 "data_out_pool_size": 2048 00:05:22.316 } 00:05:22.316 } 00:05:22.316 ] 00:05:22.316 } 00:05:22.316 ] 00:05:22.316 } 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 67878 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 67878 ']' 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 67878 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67878 00:05:22.316 killing process with pid 67878 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67878' 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 67878 00:05:22.316 20:46:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 67878 00:05:22.580 20:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.580 20:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=67905 00:05:22.580 20:46:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 67905 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 67905 ']' 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 67905 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67905 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:27.881 killing process with pid 67905 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67905' 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 67905 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 67905 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:27.881 00:05:27.881 real 0m6.935s 00:05:27.881 user 0m6.687s 00:05:27.881 sys 0m0.597s 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.881 20:46:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:27.881 ************************************ 00:05:27.881 END TEST skip_rpc_with_json 00:05:27.881 ************************************ 00:05:28.140 20:46:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:28.140 20:46:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.140 20:46:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.140 20:46:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.140 ************************************ 00:05:28.140 START TEST skip_rpc_with_delay 00:05:28.140 ************************************ 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # local es=0 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 
0x1 --wait-for-rpc 00:05:28.140 [2024-08-11 20:46:38.741330] app.c: 833:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:28.140 [2024-08-11 20:46:38.741434] app.c: 712:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # es=1 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:28.140 00:05:28.140 real 0m0.062s 00:05:28.140 user 0m0.047s 00:05:28.140 sys 0m0.015s 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.140 20:46:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:28.140 ************************************ 00:05:28.140 END TEST skip_rpc_with_delay 00:05:28.140 ************************************ 00:05:28.140 20:46:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:28.140 20:46:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:28.140 20:46:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:28.140 20:46:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.140 20:46:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.140 20:46:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.140 ************************************ 00:05:28.140 START TEST exit_on_failed_rpc_init 00:05:28.140 ************************************ 00:05:28.140 20:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:28.140 20:46:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=68009 00:05:28.140 20:46:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 68009 00:05:28.140 20:46:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.140 20:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 68009 ']' 00:05:28.140 20:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.140 20:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:28.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.141 20:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.141 20:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:28.141 20:46:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.141 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:28.141 [2024-08-11 20:46:38.873835] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:05:28.141 [2024-08-11 20:46:38.873932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68009 ] 00:05:28.399 [2024-08-11 20:46:39.010063] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.399 [2024-08-11 20:46:39.072724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.399 [2024-08-11 20:46:39.123487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # local es=0 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:28.657 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:28.657 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:28.657 [2024-08-11 20:46:39.382016] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:28.657 [2024-08-11 20:46:39.382138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68024 ] 00:05:28.916 [2024-08-11 20:46:39.522103] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.916 [2024-08-11 20:46:39.585781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.916 [2024-08-11 20:46:39.585941] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
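The "socket in use" error here, and the RPC-init failure that follows, are exactly what the exit_on_failed_rpc_init test is checking for: the first spdk_tgt (pid 68009) already owns /var/tmp/spdk.sock, so a second instance started with default options must fail its RPC setup and exit non-zero. Purely as an illustration (not something this test does), two targets can coexist when the second one is given its own RPC socket with -r, the same flag the json_config tests below pass as -r /var/tmp/spdk_tgt.sock; the path /var/tmp/spdk2.sock is just an example:

    # second target on its own core mask and its own RPC socket (example path)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
    # talk to it on that socket instead of the default /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods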
00:05:28.916 [2024-08-11 20:46:39.585959] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:28.916 [2024-08-11 20:46:39.585969] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # es=234 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@658 -- # es=106 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # case "$es" in 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@666 -- # es=1 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 68009 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 68009 ']' 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 68009 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:28.916 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68009 00:05:29.174 killing process with pid 68009 00:05:29.174 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:29.174 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:29.174 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68009' 00:05:29.174 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 68009 00:05:29.174 20:46:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 68009 00:05:29.433 00:05:29.433 real 0m1.269s 00:05:29.433 user 0m1.354s 00:05:29.433 sys 0m0.400s 00:05:29.433 20:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.433 ************************************ 00:05:29.433 END TEST exit_on_failed_rpc_init 00:05:29.433 ************************************ 00:05:29.433 20:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.433 20:46:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.433 00:05:29.433 real 0m13.959s 00:05:29.433 user 0m13.222s 00:05:29.433 sys 0m1.477s 00:05:29.433 20:46:40 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.433 ************************************ 00:05:29.433 END TEST skip_rpc 00:05:29.433 ************************************ 00:05:29.433 20:46:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.433 20:46:40 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:29.433 20:46:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.433 20:46:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.433 20:46:40 -- common/autotest_common.sh@10 -- # set +x 00:05:29.433 
************************************ 00:05:29.433 START TEST rpc_client 00:05:29.433 ************************************ 00:05:29.433 20:46:40 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:29.692 * Looking for test storage... 00:05:29.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:29.692 20:46:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:29.692 OK 00:05:29.692 20:46:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:29.692 00:05:29.692 real 0m0.109s 00:05:29.692 user 0m0.047s 00:05:29.692 sys 0m0.068s 00:05:29.692 20:46:40 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:29.692 20:46:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:29.692 ************************************ 00:05:29.692 END TEST rpc_client 00:05:29.692 ************************************ 00:05:29.692 20:46:40 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:29.692 20:46:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:29.692 20:46:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:29.692 20:46:40 -- common/autotest_common.sh@10 -- # set +x 00:05:29.692 ************************************ 00:05:29.692 START TEST json_config 00:05:29.692 ************************************ 00:05:29.692 20:46:40 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:29.692 20:46:40 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:29.692 20:46:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:29.693 20:46:40 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.693 20:46:40 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.693 20:46:40 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.693 20:46:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.693 20:46:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.693 20:46:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.693 20:46:40 json_config -- paths/export.sh@5 -- # export PATH 00:05:29.693 20:46:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@47 -- # : 0 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:29.693 20:46:40 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:29.693 20:46:40 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.693 INFO: JSON configuration test init 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:29.693 20:46:40 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:29.693 20:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:29.693 20:46:40 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:29.693 20:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.693 20:46:40 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:29.693 20:46:40 json_config -- json_config/common.sh@9 -- # local app=target 00:05:29.693 20:46:40 json_config -- json_config/common.sh@10 -- # shift 00:05:29.693 20:46:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.693 20:46:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.693 20:46:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.693 20:46:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.693 20:46:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.693 20:46:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=68143 00:05:29.693 Waiting for target to run... 00:05:29.693 20:46:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.693 20:46:40 json_config -- json_config/common.sh@25 -- # waitforlisten 68143 /var/tmp/spdk_tgt.sock 00:05:29.693 20:46:40 json_config -- common/autotest_common.sh@827 -- # '[' -z 68143 ']' 00:05:29.693 20:46:40 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:29.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
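waitforlisten above blocks until the freshly started target answers on its RPC socket. A minimal sketch of that idea, assuming rpc.py's -t per-call timeout and the rpc_get_methods RPC (the real helper in autotest_common.sh does more bookkeeping, for example it also checks that the pid is still alive):

    # poll the RPC socket until the target responds
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done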
00:05:29.693 20:46:40 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.693 20:46:40 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.693 20:46:40 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.693 20:46:40 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.693 20:46:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.952 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:29.952 [2024-08-11 20:46:40.502381] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:29.952 [2024-08-11 20:46:40.502513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68143 ] 00:05:30.210 [2024-08-11 20:46:40.929230] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.469 [2024-08-11 20:46:40.995199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.036 20:46:41 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:31.037 20:46:41 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:31.037 00:05:31.037 20:46:41 json_config -- json_config/common.sh@26 -- # echo '' 00:05:31.037 20:46:41 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:31.037 20:46:41 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:31.037 20:46:41 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.037 20:46:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.037 20:46:41 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:31.037 20:46:41 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:31.037 20:46:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.037 20:46:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.037 20:46:41 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:31.037 20:46:41 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:31.037 20:46:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:31.296 [2024-08-11 20:46:41.850872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.296 20:46:42 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:31.296 20:46:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:31.296 20:46:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.296 20:46:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.296 20:46:42 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:31.296 20:46:42 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:31.296 20:46:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:31.296 20:46:42 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:31.296 
20:46:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:31.296 20:46:42 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:31.554 20:46:42 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:31.554 20:46:42 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:31.554 20:46:42 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:31.554 20:46:42 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:31.554 20:46:42 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@51 -- # sort 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:31.813 20:46:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.813 20:46:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:31.813 20:46:42 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:31.813 20:46:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:31.813 20:46:42 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:31.813 20:46:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.071 MallocForNvmf0 00:05:32.071 20:46:42 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.071 20:46:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.330 MallocForNvmf1 00:05:32.330 20:46:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.330 20:46:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:32.589 [2024-08-11 20:46:43.130947] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.589 
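With the TCP transport initialized, the calls traced just above and just below add up to a small NVMe-oF target built entirely over JSON-RPC: two malloc bdevs, one subsystem, two namespaces and one listener. Pulled out of the tgt_rpc wrapper, the same sequence is simply:

    # the same RPCs the test drives through tgt_rpc, shown as plain invocations
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420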
20:46:43 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.589 20:46:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:32.589 20:46:43 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.590 20:46:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:32.849 20:46:43 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:32.849 20:46:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.108 20:46:43 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.108 20:46:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.367 [2024-08-11 20:46:43.951412] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.367 20:46:43 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:33.367 20:46:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.367 20:46:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.367 20:46:44 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:33.367 20:46:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.367 20:46:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.367 20:46:44 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:33.367 20:46:44 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.367 20:46:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:33.627 MallocBdevForConfigChangeCheck 00:05:33.627 20:46:44 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:33.627 20:46:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.627 20:46:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.627 20:46:44 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:33.627 20:46:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.195 INFO: shutting down applications... 00:05:34.195 20:46:44 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
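The save_config call just above is what fills /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json for the relaunch check further down: save_config writes the complete running configuration as JSON to stdout, and the harness redirects that into the file, so restarting with --json round-trips the whole setup. Done by hand it would look roughly like:

    # capture the live configuration, then start a target directly from it
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json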
00:05:34.195 20:46:44 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:34.195 20:46:44 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:34.195 20:46:44 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:34.195 20:46:44 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:34.454 Calling clear_iscsi_subsystem 00:05:34.454 Calling clear_nvmf_subsystem 00:05:34.454 Calling clear_nbd_subsystem 00:05:34.454 Calling clear_ublk_subsystem 00:05:34.454 Calling clear_vhost_blk_subsystem 00:05:34.454 Calling clear_vhost_scsi_subsystem 00:05:34.454 Calling clear_bdev_subsystem 00:05:34.454 20:46:45 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:34.454 20:46:45 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:34.454 20:46:45 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:34.454 20:46:45 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.454 20:46:45 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:34.454 20:46:45 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:35.022 20:46:45 json_config -- json_config/json_config.sh@349 -- # break 00:05:35.022 20:46:45 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:35.022 20:46:45 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:35.022 20:46:45 json_config -- json_config/common.sh@31 -- # local app=target 00:05:35.022 20:46:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.022 20:46:45 json_config -- json_config/common.sh@35 -- # [[ -n 68143 ]] 00:05:35.022 20:46:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 68143 00:05:35.022 20:46:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.022 20:46:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.022 20:46:45 json_config -- json_config/common.sh@41 -- # kill -0 68143 00:05:35.022 20:46:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.281 20:46:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.281 20:46:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.281 20:46:46 json_config -- json_config/common.sh@41 -- # kill -0 68143 00:05:35.281 20:46:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.281 20:46:46 json_config -- json_config/common.sh@43 -- # break 00:05:35.281 SPDK target shutdown done 00:05:35.281 20:46:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.281 20:46:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.281 INFO: relaunching applications... 00:05:35.281 20:46:46 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
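The shutdown traced above (kill -SIGINT, then repeated kill -0 probes with 0.5 s sleeps, up to 30 tries) is the same graceful-stop pattern every test in this log uses. As a standalone sketch, with $spdk_pid standing in for the concrete pid:

    kill -SIGINT "$spdk_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$spdk_pid" 2>/dev/null || break   # process gone, shutdown complete
        sleep 0.5
    done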
00:05:35.281 20:46:46 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.281 20:46:46 json_config -- json_config/common.sh@9 -- # local app=target 00:05:35.281 20:46:46 json_config -- json_config/common.sh@10 -- # shift 00:05:35.281 20:46:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.281 20:46:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.281 20:46:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.281 20:46:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.281 20:46:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.281 20:46:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=68339 00:05:35.281 Waiting for target to run... 00:05:35.281 20:46:46 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:35.281 20:46:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.281 20:46:46 json_config -- json_config/common.sh@25 -- # waitforlisten 68339 /var/tmp/spdk_tgt.sock 00:05:35.281 20:46:46 json_config -- common/autotest_common.sh@827 -- # '[' -z 68339 ']' 00:05:35.281 20:46:46 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.281 20:46:46 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:35.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.281 20:46:46 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.281 20:46:46 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:35.281 20:46:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.540 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:35.540 [2024-08-11 20:46:46.099189] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:35.540 [2024-08-11 20:46:46.099900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68339 ] 00:05:35.810 [2024-08-11 20:46:46.546234] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.086 [2024-08-11 20:46:46.606006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.086 [2024-08-11 20:46:46.731077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.345 [2024-08-11 20:46:46.926062] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.345 [2024-08-11 20:46:46.958166] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:36.345 20:46:47 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.345 00:05:36.345 20:46:47 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:36.345 20:46:47 json_config -- json_config/common.sh@26 -- # echo '' 00:05:36.345 20:46:47 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:36.345 INFO: Checking if target configuration is the same... 
00:05:36.345 20:46:47 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:36.346 20:46:47 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:36.346 20:46:47 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:36.346 20:46:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.346 + '[' 2 -ne 2 ']' 00:05:36.346 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:36.346 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:36.346 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:36.346 +++ basename /dev/fd/62 00:05:36.346 ++ mktemp /tmp/62.XXX 00:05:36.605 + tmp_file_1=/tmp/62.HUA 00:05:36.605 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:36.605 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:36.605 + tmp_file_2=/tmp/spdk_tgt_config.json.H4V 00:05:36.605 + ret=0 00:05:36.605 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:36.864 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:36.864 + diff -u /tmp/62.HUA /tmp/spdk_tgt_config.json.H4V 00:05:36.864 INFO: JSON config files are the same 00:05:36.864 + echo 'INFO: JSON config files are the same' 00:05:36.864 + rm /tmp/62.HUA /tmp/spdk_tgt_config.json.H4V 00:05:36.864 + exit 0 00:05:36.864 20:46:47 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:36.864 INFO: changing configuration and checking if this can be detected... 00:05:36.864 20:46:47 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:36.864 20:46:47 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:36.864 20:46:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:37.124 20:46:47 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.124 20:46:47 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:37.124 20:46:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.124 + '[' 2 -ne 2 ']' 00:05:37.124 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:37.124 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:37.124 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:37.124 +++ basename /dev/fd/62 00:05:37.124 ++ mktemp /tmp/62.XXX 00:05:37.124 + tmp_file_1=/tmp/62.2oi 00:05:37.124 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.124 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.124 + tmp_file_2=/tmp/spdk_tgt_config.json.LK1 00:05:37.124 + ret=0 00:05:37.124 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:37.691 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:37.691 + diff -u /tmp/62.2oi /tmp/spdk_tgt_config.json.LK1 00:05:37.691 + ret=1 00:05:37.691 + echo '=== Start of file: /tmp/62.2oi ===' 00:05:37.691 + cat /tmp/62.2oi 00:05:37.691 + echo '=== End of file: /tmp/62.2oi ===' 00:05:37.691 + echo '' 00:05:37.691 + echo '=== Start of file: /tmp/spdk_tgt_config.json.LK1 ===' 00:05:37.691 + cat /tmp/spdk_tgt_config.json.LK1 00:05:37.691 + echo '=== End of file: /tmp/spdk_tgt_config.json.LK1 ===' 00:05:37.691 + echo '' 00:05:37.691 + rm /tmp/62.2oi /tmp/spdk_tgt_config.json.LK1 00:05:37.691 + exit 1 00:05:37.691 INFO: configuration change detected. 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@321 -- # [[ -n 68339 ]] 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.691 20:46:48 json_config -- json_config/json_config.sh@327 -- # killprocess 68339 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@946 -- # '[' -z 68339 ']' 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@950 -- # kill -0 68339 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@951 -- # uname 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68339 00:05:37.691 
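Both configuration checks above (the "JSON config files are the same" pass and the "configuration change detected" pass after MallocBdevForConfigChangeCheck was deleted) rest on a deliberately simple comparison: the save_config output and the on-disk file are each normalised by config_filter.py -method sort into temporary files, which are then compared with diff -u; a non-zero diff status marks a change. Redone by hand, with arbitrary temp-file names, it looks roughly like this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/running.sorted.json
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk.sorted.json
    diff -u /tmp/disk.sorted.json /tmp/running.sorted.json   # exit status 1 means the configs differ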
20:46:48 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:37.691 killing process with pid 68339 00:05:37.691 20:46:48 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:37.692 20:46:48 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68339' 00:05:37.692 20:46:48 json_config -- common/autotest_common.sh@965 -- # kill 68339 00:05:37.692 20:46:48 json_config -- common/autotest_common.sh@970 -- # wait 68339 00:05:37.950 20:46:48 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.950 20:46:48 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:37.951 20:46:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.951 20:46:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.951 20:46:48 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:37.951 INFO: Success 00:05:37.951 20:46:48 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:37.951 00:05:37.951 real 0m8.257s 00:05:37.951 user 0m11.817s 00:05:37.951 sys 0m1.704s 00:05:37.951 20:46:48 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.951 20:46:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.951 ************************************ 00:05:37.951 END TEST json_config 00:05:37.951 ************************************ 00:05:37.951 20:46:48 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:37.951 20:46:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.951 20:46:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.951 20:46:48 -- common/autotest_common.sh@10 -- # set +x 00:05:37.951 ************************************ 00:05:37.951 START TEST json_config_extra_key 00:05:37.951 ************************************ 00:05:37.951 20:46:48 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:37.951 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:05:37.951 20:46:48 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.951 20:46:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.211 20:46:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.211 20:46:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.211 20:46:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.211 20:46:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.211 20:46:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.211 20:46:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.211 20:46:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:38.211 20:46:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.212 20:46:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:38.212 20:46:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:38.212 20:46:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:38.212 20:46:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.212 20:46:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.212 20:46:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.212 20:46:48 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:38.212 20:46:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:38.212 20:46:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:38.212 INFO: launching applications... 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:38.212 20:46:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.212 Waiting for target to run... 00:05:38.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=68479 00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
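The trace above fills in the per-app bookkeeping arrays (app_pid, app_socket, app_params, configs_path) and is about to start the target and wait for its RPC socket. A minimal standalone sketch of that launch-and-wait step, assuming the spdk_tgt binary and extra_key.json paths from this run (the test itself uses the waitforlisten helper from test/json_config/common.sh; the polling loop here is only an illustrative stand-in):

  # Sketch: start spdk_tgt with a JSON config and wait for its RPC socket.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  app_pid=$!
  for i in $(seq 1 30); do                      # poll instead of calling waitforlisten
      [ -S /var/tmp/spdk_tgt.sock ] && break    # -S: the UNIX-domain socket exists
      sleep 0.5
  done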
00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 68479 /var/tmp/spdk_tgt.sock 00:05:38.212 20:46:48 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 68479 ']' 00:05:38.212 20:46:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:38.212 20:46:48 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.212 20:46:48 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:38.212 20:46:48 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.212 20:46:48 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:38.212 20:46:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:38.212 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:38.212 [2024-08-11 20:46:48.803818] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:38.212 [2024-08-11 20:46:48.803921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68479 ] 00:05:38.472 [2024-08-11 20:46:49.231872] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.731 [2024-08-11 20:46:49.283181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.731 [2024-08-11 20:46:49.303130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.298 20:46:49 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:39.298 20:46:49 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:39.298 00:05:39.298 20:46:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:39.298 INFO: shutting down applications... 00:05:39.298 20:46:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
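The "shutting down applications..." message hands control to json_config_test_shutdown_app; the trace that follows sends SIGINT to pid 68479 and then polls it with kill -0 in half-second steps, up to 30 times. A condensed sketch of that same shutdown pattern (illustrative; the real loop lives in test/json_config/common.sh):

  kill -SIGINT "$app_pid"                       # ask the target to exit cleanly
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || break   # kill -0: still running? stop polling once it is gone
      sleep 0.5
  done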
00:05:39.298 20:46:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:39.298 20:46:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:39.298 20:46:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.298 20:46:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 68479 ]] 00:05:39.298 20:46:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 68479 00:05:39.298 20:46:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.298 20:46:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.298 20:46:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 68479 00:05:39.298 20:46:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.558 20:46:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.558 20:46:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.558 20:46:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 68479 00:05:39.558 20:46:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.558 20:46:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:39.558 20:46:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.558 SPDK target shutdown done 00:05:39.558 20:46:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.558 Success 00:05:39.558 20:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:39.558 00:05:39.558 real 0m1.656s 00:05:39.558 user 0m1.563s 00:05:39.558 sys 0m0.441s 00:05:39.558 20:46:50 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.558 20:46:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.558 ************************************ 00:05:39.558 END TEST json_config_extra_key 00:05:39.558 ************************************ 00:05:39.817 20:46:50 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.817 20:46:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:39.817 20:46:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.817 20:46:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.817 ************************************ 00:05:39.817 START TEST alias_rpc 00:05:39.817 ************************************ 00:05:39.817 20:46:50 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.817 * Looking for test storage... 
00:05:39.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:39.817 20:46:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.817 20:46:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68544 00:05:39.817 20:46:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68544 00:05:39.817 20:46:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.817 20:46:50 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 68544 ']' 00:05:39.817 20:46:50 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.817 20:46:50 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:39.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.817 20:46:50 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.817 20:46:50 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:39.817 20:46:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.817 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:39.817 [2024-08-11 20:46:50.513155] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:39.818 [2024-08-11 20:46:50.513259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68544 ] 00:05:40.076 [2024-08-11 20:46:50.648511] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.076 [2024-08-11 20:46:50.705964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.076 [2024-08-11 20:46:50.762346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:41.012 20:46:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:41.012 20:46:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68544 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 68544 ']' 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 68544 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68544 00:05:41.012 killing process with pid 68544 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68544' 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@965 -- # kill 68544 00:05:41.012 20:46:51 alias_rpc -- common/autotest_common.sh@970 -- # wait 68544 00:05:41.271 ************************************ 00:05:41.271 END TEST alias_rpc 00:05:41.271 ************************************ 00:05:41.271 00:05:41.271 real 0m1.645s 00:05:41.271 user 
0m1.810s 00:05:41.271 sys 0m0.404s 00:05:41.271 20:46:52 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.271 20:46:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.530 20:46:52 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:41.531 20:46:52 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:41.531 20:46:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.531 20:46:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.531 20:46:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.531 ************************************ 00:05:41.531 START TEST spdkcli_tcp 00:05:41.531 ************************************ 00:05:41.531 20:46:52 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:41.531 * Looking for test storage... 00:05:41.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:41.531 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:41.531 20:46:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:41.531 20:46:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:41.531 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:41.531 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:41.531 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:41.531 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:41.531 20:46:52 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:41.531 20:46:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.531 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:41.531 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=68619 00:05:41.531 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 68619 00:05:41.531 20:46:52 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 68619 ']' 00:05:41.531 20:46:52 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.531 20:46:52 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.531 20:46:52 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.531 20:46:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.531 20:46:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.531 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:41.531 [2024-08-11 20:46:52.208393] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
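Once this target is listening, the spdkcli_tcp test (traced below) bridges TCP port 9998 to the target's UNIX-domain RPC socket with socat and enumerates the RPC surface over TCP, which produces the long rpc_get_methods listing that follows. A condensed sketch of that bridge-and-query step, reusing the commands and retry/timeout values recorded in the trace:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP front end for the RPC socket
  socat_pid=$!
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"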
00:05:41.531 [2024-08-11 20:46:52.208493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68619 ] 00:05:41.790 [2024-08-11 20:46:52.346303] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.790 [2024-08-11 20:46:52.406970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.790 [2024-08-11 20:46:52.406978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.790 [2024-08-11 20:46:52.462996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.048 20:46:52 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.048 20:46:52 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:42.048 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=68624 00:05:42.049 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.049 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.308 [ 00:05:42.308 "bdev_malloc_delete", 00:05:42.308 "bdev_malloc_create", 00:05:42.308 "bdev_null_resize", 00:05:42.308 "bdev_null_delete", 00:05:42.308 "bdev_null_create", 00:05:42.308 "bdev_nvme_cuse_unregister", 00:05:42.308 "bdev_nvme_cuse_register", 00:05:42.308 "bdev_opal_new_user", 00:05:42.308 "bdev_opal_set_lock_state", 00:05:42.308 "bdev_opal_delete", 00:05:42.308 "bdev_opal_get_info", 00:05:42.308 "bdev_opal_create", 00:05:42.308 "bdev_nvme_opal_revert", 00:05:42.308 "bdev_nvme_opal_init", 00:05:42.308 "bdev_nvme_send_cmd", 00:05:42.308 "bdev_nvme_get_path_iostat", 00:05:42.308 "bdev_nvme_get_mdns_discovery_info", 00:05:42.308 "bdev_nvme_stop_mdns_discovery", 00:05:42.308 "bdev_nvme_start_mdns_discovery", 00:05:42.308 "bdev_nvme_set_multipath_policy", 00:05:42.308 "bdev_nvme_set_preferred_path", 00:05:42.308 "bdev_nvme_get_io_paths", 00:05:42.308 "bdev_nvme_remove_error_injection", 00:05:42.308 "bdev_nvme_add_error_injection", 00:05:42.308 "bdev_nvme_get_discovery_info", 00:05:42.308 "bdev_nvme_stop_discovery", 00:05:42.308 "bdev_nvme_start_discovery", 00:05:42.308 "bdev_nvme_get_controller_health_info", 00:05:42.308 "bdev_nvme_disable_controller", 00:05:42.308 "bdev_nvme_enable_controller", 00:05:42.308 "bdev_nvme_reset_controller", 00:05:42.308 "bdev_nvme_get_transport_statistics", 00:05:42.308 "bdev_nvme_apply_firmware", 00:05:42.308 "bdev_nvme_detach_controller", 00:05:42.308 "bdev_nvme_get_controllers", 00:05:42.308 "bdev_nvme_attach_controller", 00:05:42.308 "bdev_nvme_set_hotplug", 00:05:42.308 "bdev_nvme_set_options", 00:05:42.308 "bdev_passthru_delete", 00:05:42.308 "bdev_passthru_create", 00:05:42.308 "bdev_lvol_set_parent_bdev", 00:05:42.308 "bdev_lvol_set_parent", 00:05:42.308 "bdev_lvol_check_shallow_copy", 00:05:42.308 "bdev_lvol_start_shallow_copy", 00:05:42.308 "bdev_lvol_grow_lvstore", 00:05:42.308 "bdev_lvol_get_lvols", 00:05:42.308 "bdev_lvol_get_lvstores", 00:05:42.308 "bdev_lvol_delete", 00:05:42.308 "bdev_lvol_set_read_only", 00:05:42.308 "bdev_lvol_resize", 00:05:42.308 "bdev_lvol_decouple_parent", 00:05:42.308 "bdev_lvol_inflate", 00:05:42.308 "bdev_lvol_rename", 00:05:42.308 "bdev_lvol_clone_bdev", 00:05:42.308 "bdev_lvol_clone", 00:05:42.308 "bdev_lvol_snapshot", 00:05:42.308 "bdev_lvol_create", 
00:05:42.308 "bdev_lvol_delete_lvstore", 00:05:42.308 "bdev_lvol_rename_lvstore", 00:05:42.308 "bdev_lvol_create_lvstore", 00:05:42.308 "bdev_raid_set_options", 00:05:42.308 "bdev_raid_remove_base_bdev", 00:05:42.308 "bdev_raid_add_base_bdev", 00:05:42.308 "bdev_raid_delete", 00:05:42.308 "bdev_raid_create", 00:05:42.308 "bdev_raid_get_bdevs", 00:05:42.308 "bdev_error_inject_error", 00:05:42.308 "bdev_error_delete", 00:05:42.308 "bdev_error_create", 00:05:42.308 "bdev_split_delete", 00:05:42.308 "bdev_split_create", 00:05:42.308 "bdev_delay_delete", 00:05:42.308 "bdev_delay_create", 00:05:42.308 "bdev_delay_update_latency", 00:05:42.308 "bdev_zone_block_delete", 00:05:42.308 "bdev_zone_block_create", 00:05:42.308 "blobfs_create", 00:05:42.308 "blobfs_detect", 00:05:42.308 "blobfs_set_cache_size", 00:05:42.308 "bdev_aio_delete", 00:05:42.308 "bdev_aio_rescan", 00:05:42.308 "bdev_aio_create", 00:05:42.308 "bdev_ftl_set_property", 00:05:42.308 "bdev_ftl_get_properties", 00:05:42.308 "bdev_ftl_get_stats", 00:05:42.308 "bdev_ftl_unmap", 00:05:42.308 "bdev_ftl_unload", 00:05:42.308 "bdev_ftl_delete", 00:05:42.308 "bdev_ftl_load", 00:05:42.308 "bdev_ftl_create", 00:05:42.308 "bdev_virtio_attach_controller", 00:05:42.308 "bdev_virtio_scsi_get_devices", 00:05:42.308 "bdev_virtio_detach_controller", 00:05:42.308 "bdev_virtio_blk_set_hotplug", 00:05:42.308 "bdev_iscsi_delete", 00:05:42.308 "bdev_iscsi_create", 00:05:42.308 "bdev_iscsi_set_options", 00:05:42.308 "bdev_uring_delete", 00:05:42.308 "bdev_uring_rescan", 00:05:42.308 "bdev_uring_create", 00:05:42.308 "accel_error_inject_error", 00:05:42.308 "ioat_scan_accel_module", 00:05:42.308 "dsa_scan_accel_module", 00:05:42.308 "iaa_scan_accel_module", 00:05:42.308 "keyring_file_remove_key", 00:05:42.308 "keyring_file_add_key", 00:05:42.308 "keyring_linux_set_options", 00:05:42.308 "iscsi_get_histogram", 00:05:42.308 "iscsi_enable_histogram", 00:05:42.308 "iscsi_set_options", 00:05:42.308 "iscsi_get_auth_groups", 00:05:42.308 "iscsi_auth_group_remove_secret", 00:05:42.308 "iscsi_auth_group_add_secret", 00:05:42.308 "iscsi_delete_auth_group", 00:05:42.308 "iscsi_create_auth_group", 00:05:42.308 "iscsi_set_discovery_auth", 00:05:42.308 "iscsi_get_options", 00:05:42.308 "iscsi_target_node_request_logout", 00:05:42.308 "iscsi_target_node_set_redirect", 00:05:42.308 "iscsi_target_node_set_auth", 00:05:42.308 "iscsi_target_node_add_lun", 00:05:42.308 "iscsi_get_stats", 00:05:42.308 "iscsi_get_connections", 00:05:42.308 "iscsi_portal_group_set_auth", 00:05:42.308 "iscsi_start_portal_group", 00:05:42.308 "iscsi_delete_portal_group", 00:05:42.308 "iscsi_create_portal_group", 00:05:42.308 "iscsi_get_portal_groups", 00:05:42.308 "iscsi_delete_target_node", 00:05:42.308 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.309 "iscsi_target_node_add_pg_ig_maps", 00:05:42.309 "iscsi_create_target_node", 00:05:42.309 "iscsi_get_target_nodes", 00:05:42.309 "iscsi_delete_initiator_group", 00:05:42.309 "iscsi_initiator_group_remove_initiators", 00:05:42.309 "iscsi_initiator_group_add_initiators", 00:05:42.309 "iscsi_create_initiator_group", 00:05:42.309 "iscsi_get_initiator_groups", 00:05:42.309 "nvmf_set_crdt", 00:05:42.309 "nvmf_set_config", 00:05:42.309 "nvmf_set_max_subsystems", 00:05:42.309 "nvmf_stop_mdns_prr", 00:05:42.309 "nvmf_publish_mdns_prr", 00:05:42.309 "nvmf_subsystem_get_listeners", 00:05:42.309 "nvmf_subsystem_get_qpairs", 00:05:42.309 "nvmf_subsystem_get_controllers", 00:05:42.309 "nvmf_get_stats", 00:05:42.309 "nvmf_get_transports", 00:05:42.309 
"nvmf_create_transport", 00:05:42.309 "nvmf_get_targets", 00:05:42.309 "nvmf_delete_target", 00:05:42.309 "nvmf_create_target", 00:05:42.309 "nvmf_subsystem_allow_any_host", 00:05:42.309 "nvmf_subsystem_remove_host", 00:05:42.309 "nvmf_subsystem_add_host", 00:05:42.309 "nvmf_ns_remove_host", 00:05:42.309 "nvmf_ns_add_host", 00:05:42.309 "nvmf_subsystem_remove_ns", 00:05:42.309 "nvmf_subsystem_add_ns", 00:05:42.309 "nvmf_subsystem_listener_set_ana_state", 00:05:42.309 "nvmf_discovery_get_referrals", 00:05:42.309 "nvmf_discovery_remove_referral", 00:05:42.309 "nvmf_discovery_add_referral", 00:05:42.309 "nvmf_subsystem_remove_listener", 00:05:42.309 "nvmf_subsystem_add_listener", 00:05:42.309 "nvmf_delete_subsystem", 00:05:42.309 "nvmf_create_subsystem", 00:05:42.309 "nvmf_get_subsystems", 00:05:42.309 "env_dpdk_get_mem_stats", 00:05:42.309 "nbd_get_disks", 00:05:42.309 "nbd_stop_disk", 00:05:42.309 "nbd_start_disk", 00:05:42.309 "ublk_recover_disk", 00:05:42.309 "ublk_get_disks", 00:05:42.309 "ublk_stop_disk", 00:05:42.309 "ublk_start_disk", 00:05:42.309 "ublk_destroy_target", 00:05:42.309 "ublk_create_target", 00:05:42.309 "virtio_blk_create_transport", 00:05:42.309 "virtio_blk_get_transports", 00:05:42.309 "vhost_controller_set_coalescing", 00:05:42.309 "vhost_get_controllers", 00:05:42.309 "vhost_delete_controller", 00:05:42.309 "vhost_create_blk_controller", 00:05:42.309 "vhost_scsi_controller_remove_target", 00:05:42.309 "vhost_scsi_controller_add_target", 00:05:42.309 "vhost_start_scsi_controller", 00:05:42.309 "vhost_create_scsi_controller", 00:05:42.309 "thread_set_cpumask", 00:05:42.309 "framework_get_governor", 00:05:42.309 "framework_get_scheduler", 00:05:42.309 "framework_set_scheduler", 00:05:42.309 "framework_get_reactors", 00:05:42.309 "thread_get_io_channels", 00:05:42.309 "thread_get_pollers", 00:05:42.309 "thread_get_stats", 00:05:42.309 "framework_monitor_context_switch", 00:05:42.309 "spdk_kill_instance", 00:05:42.309 "log_enable_timestamps", 00:05:42.309 "log_get_flags", 00:05:42.309 "log_clear_flag", 00:05:42.309 "log_set_flag", 00:05:42.309 "log_get_level", 00:05:42.309 "log_set_level", 00:05:42.309 "log_get_print_level", 00:05:42.309 "log_set_print_level", 00:05:42.309 "framework_enable_cpumask_locks", 00:05:42.309 "framework_disable_cpumask_locks", 00:05:42.309 "framework_wait_init", 00:05:42.309 "framework_start_init", 00:05:42.309 "scsi_get_devices", 00:05:42.309 "bdev_get_histogram", 00:05:42.309 "bdev_enable_histogram", 00:05:42.309 "bdev_set_qos_limit", 00:05:42.309 "bdev_set_qd_sampling_period", 00:05:42.309 "bdev_get_bdevs", 00:05:42.309 "bdev_reset_iostat", 00:05:42.309 "bdev_get_iostat", 00:05:42.309 "bdev_examine", 00:05:42.309 "bdev_wait_for_examine", 00:05:42.309 "bdev_set_options", 00:05:42.309 "notify_get_notifications", 00:05:42.309 "notify_get_types", 00:05:42.309 "accel_get_stats", 00:05:42.309 "accel_set_options", 00:05:42.309 "accel_set_driver", 00:05:42.309 "accel_crypto_key_destroy", 00:05:42.309 "accel_crypto_keys_get", 00:05:42.309 "accel_crypto_key_create", 00:05:42.309 "accel_assign_opc", 00:05:42.309 "accel_get_module_info", 00:05:42.309 "accel_get_opc_assignments", 00:05:42.309 "vmd_rescan", 00:05:42.309 "vmd_remove_device", 00:05:42.309 "vmd_enable", 00:05:42.309 "sock_get_default_impl", 00:05:42.309 "sock_set_default_impl", 00:05:42.309 "sock_impl_set_options", 00:05:42.309 "sock_impl_get_options", 00:05:42.309 "iobuf_get_stats", 00:05:42.309 "iobuf_set_options", 00:05:42.309 "framework_get_pci_devices", 00:05:42.309 
"framework_get_config", 00:05:42.309 "framework_get_subsystems", 00:05:42.309 "trace_get_info", 00:05:42.309 "trace_get_tpoint_group_mask", 00:05:42.309 "trace_disable_tpoint_group", 00:05:42.309 "trace_enable_tpoint_group", 00:05:42.309 "trace_clear_tpoint_mask", 00:05:42.309 "trace_set_tpoint_mask", 00:05:42.309 "keyring_get_keys", 00:05:42.309 "spdk_get_version", 00:05:42.309 "rpc_get_methods" 00:05:42.309 ] 00:05:42.309 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.309 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.309 20:46:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 68619 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 68619 ']' 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 68619 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68619 00:05:42.309 killing process with pid 68619 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68619' 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 68619 00:05:42.309 20:46:52 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 68619 00:05:42.568 ************************************ 00:05:42.568 END TEST spdkcli_tcp 00:05:42.568 ************************************ 00:05:42.568 00:05:42.568 real 0m1.276s 00:05:42.568 user 0m2.225s 00:05:42.568 sys 0m0.425s 00:05:42.568 20:46:53 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.568 20:46:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.826 20:46:53 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.826 20:46:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:42.826 20:46:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.826 20:46:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.826 ************************************ 00:05:42.826 START TEST dpdk_mem_utility 00:05:42.826 ************************************ 00:05:42.826 20:46:53 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:42.826 * Looking for test storage... 
00:05:42.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:42.826 20:46:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:42.826 20:46:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68698 00:05:42.826 20:46:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:42.826 20:46:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68698 00:05:42.826 20:46:53 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 68698 ']' 00:05:42.826 20:46:53 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.826 20:46:53 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:42.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.826 20:46:53 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.827 20:46:53 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:42.827 20:46:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.827 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:42.827 [2024-08-11 20:46:53.566615] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:42.827 [2024-08-11 20:46:53.567018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68698 ] 00:05:43.085 [2024-08-11 20:46:53.714749] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.085 [2024-08-11 20:46:53.777397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.085 [2024-08-11 20:46:53.830971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.344 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.344 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:43.344 20:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:43.344 20:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:43.344 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:43.345 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.345 { 00:05:43.345 "filename": "/tmp/spdk_mem_dump.txt" 00:05:43.345 } 00:05:43.345 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:43.345 20:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:43.345 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:43.345 1 heaps totaling size 814.000000 MiB 00:05:43.345 size: 814.000000 MiB heap id: 0 00:05:43.345 end heaps---------- 00:05:43.345 8 mempools totaling size 598.116089 MiB 00:05:43.345 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:43.345 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:43.345 size: 84.521057 MiB name: bdev_io_68698 
00:05:43.345 size: 51.011292 MiB name: evtpool_68698 00:05:43.345 size: 50.003479 MiB name: msgpool_68698 00:05:43.345 size: 21.763794 MiB name: PDU_Pool 00:05:43.345 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:43.345 size: 0.026123 MiB name: Session_Pool 00:05:43.345 end mempools------- 00:05:43.345 6 memzones totaling size 4.142822 MiB 00:05:43.345 size: 1.000366 MiB name: RG_ring_0_68698 00:05:43.345 size: 1.000366 MiB name: RG_ring_1_68698 00:05:43.345 size: 1.000366 MiB name: RG_ring_4_68698 00:05:43.345 size: 1.000366 MiB name: RG_ring_5_68698 00:05:43.345 size: 0.125366 MiB name: RG_ring_2_68698 00:05:43.345 size: 0.015991 MiB name: RG_ring_3_68698 00:05:43.345 end memzones------- 00:05:43.345 20:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:43.604 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:05:43.604 list of free elements. size: 12.471375 MiB 00:05:43.604 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:43.604 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:43.604 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:43.604 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:43.604 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:43.605 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:43.605 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:43.605 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:43.605 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:43.605 element at address: 0x20001aa00000 with size: 0.568054 MiB 00:05:43.605 element at address: 0x20000b200000 with size: 0.489624 MiB 00:05:43.605 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:43.605 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:43.605 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:43.605 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:43.605 list of standard malloc elements. 
size: 199.266052 MiB 00:05:43.605 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:43.605 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:43.605 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:43.605 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:43.605 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:43.605 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:43.605 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:43.605 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:43.605 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:43.605 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:05:43.605 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:43.605 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:43.605 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91e40 
with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94300 with size: 0.000183 MiB 
00:05:43.606 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:43.606 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:43.606 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:43.607 element at 
address: 0x200027e6d500 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6f9c0 
with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:43.607 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:43.607 list of memzone associated elements. size: 602.262573 MiB 00:05:43.607 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:43.607 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:43.607 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:43.607 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:43.607 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:43.607 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68698_0 00:05:43.607 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:43.607 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68698_0 00:05:43.607 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:43.607 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68698_0 00:05:43.607 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:43.607 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:43.607 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:43.607 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:43.607 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:43.607 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68698 00:05:43.607 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:43.607 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68698 00:05:43.607 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:43.607 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68698 00:05:43.607 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:43.607 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:43.607 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:43.607 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:43.607 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:43.607 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:43.607 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:43.607 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:43.607 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:43.607 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68698 00:05:43.607 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:43.607 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68698 00:05:43.607 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:43.607 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68698 00:05:43.607 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:43.607 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68698 00:05:43.607 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:43.607 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68698 
00:05:43.607 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:43.607 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:43.607 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:43.607 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:43.607 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:43.607 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:43.607 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:43.607 associated memzone info: size: 0.125366 MiB name: RG_ring_2_68698 00:05:43.607 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:43.607 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:43.607 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:43.607 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:43.607 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:43.607 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68698 00:05:43.607 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:43.607 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:43.607 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:43.607 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68698 00:05:43.607 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:43.607 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68698 00:05:43.607 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:43.607 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:43.607 20:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:43.607 20:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68698 00:05:43.608 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 68698 ']' 00:05:43.608 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 68698 00:05:43.608 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:43.608 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:43.608 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68698 00:05:43.608 killing process with pid 68698 00:05:43.608 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:43.608 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:43.608 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68698' 00:05:43.608 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 68698 00:05:43.608 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 68698 00:05:43.867 00:05:43.867 real 0m1.160s 00:05:43.867 user 0m1.101s 00:05:43.867 sys 0m0.438s 00:05:43.867 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.867 ************************************ 00:05:43.867 END TEST dpdk_mem_utility 00:05:43.867 20:46:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.867 ************************************ 00:05:43.867 20:46:54 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:43.867 20:46:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:43.867 
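Editor's note: the dpdk_mem_utility test above tears itself down with the killprocess helper from autotest_common.sh (the kill -0 liveness check, the ps comm= lookup, and the final kill/wait are all visible in the trace). Below is a minimal, simplified sketch of that cleanup pattern in plain bash; it is not the SPDK helper itself, and the polling loop stands in for the helper's wait logic.

# killprocess-style cleanup: confirm the pid is alive, report what is being
# killed, send SIGTERM, then poll until the process is gone.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1
    # kill -0 only checks that the process exists and that we may signal it
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is not running"
        return 0
    fi
    echo "killing process with pid $pid: $(ps --no-headers -o comm= "$pid")"
    kill "$pid"
    # plain wait only works for children of this shell, so poll instead
    while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
}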
20:46:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.867 20:46:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.867 ************************************ 00:05:43.867 START TEST event 00:05:43.867 ************************************ 00:05:43.867 20:46:54 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:44.126 * Looking for test storage... 00:05:44.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:44.126 20:46:54 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:44.126 20:46:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:44.126 20:46:54 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.126 20:46:54 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:44.126 20:46:54 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.126 20:46:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.126 ************************************ 00:05:44.126 START TEST event_perf 00:05:44.126 ************************************ 00:05:44.126 20:46:54 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:44.126 Running I/O for 1 seconds...Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:44.126 [2024-08-11 20:46:54.715168] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:44.126 [2024-08-11 20:46:54.715284] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68762 ] 00:05:44.126 [2024-08-11 20:46:54.851226] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.386 [2024-08-11 20:46:54.928234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.386 [2024-08-11 20:46:54.928350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.386 [2024-08-11 20:46:54.928488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.386 [2024-08-11 20:46:54.928506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.327 Running I/O for 1 seconds... 00:05:45.327 lcore 0: 209798 00:05:45.328 lcore 1: 209800 00:05:45.328 lcore 2: 209800 00:05:45.328 lcore 3: 209798 00:05:45.328 done. 
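Editor's note: the event_perf run above uses a 4-core mask (-m 0xF) for one second (-t 1) and prints one event counter per lcore before "done.". A small illustrative sketch of totalling those per-lcore lines after a run is shown below; the binary path matches this workspace's layout and would need adjusting for a local checkout, and the run assumes hugepages are already configured.

# Run the SPDK event_perf microbenchmark on cores 0-3 for 1 second and sum
# the "lcore N: <count>" lines it prints into a single total.
EVENT_PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
"$EVENT_PERF" -m 0xF -t 1 | awk '/^lcore/ {total += $3} END {print "total events:", total}'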
00:05:45.328 00:05:45.328 real 0m1.301s 00:05:45.328 user 0m4.116s 00:05:45.328 sys 0m0.066s 00:05:45.328 ************************************ 00:05:45.328 END TEST event_perf 00:05:45.328 ************************************ 00:05:45.328 20:46:55 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.328 20:46:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.328 20:46:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:45.328 20:46:56 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:45.328 20:46:56 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.328 20:46:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.328 ************************************ 00:05:45.328 START TEST event_reactor 00:05:45.328 ************************************ 00:05:45.328 20:46:56 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:45.328 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:45.328 [2024-08-11 20:46:56.081073] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:45.328 [2024-08-11 20:46:56.081186] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68800 ] 00:05:45.602 [2024-08-11 20:46:56.227713] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.602 [2024-08-11 20:46:56.372461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.979 test_start 00:05:46.979 oneshot 00:05:46.979 tick 100 00:05:46.979 tick 100 00:05:46.979 tick 250 00:05:46.979 tick 100 00:05:46.979 tick 100 00:05:46.979 tick 250 00:05:46.979 tick 500 00:05:46.979 tick 100 00:05:46.979 tick 100 00:05:46.979 tick 100 00:05:46.979 tick 250 00:05:46.979 tick 100 00:05:46.979 tick 100 00:05:46.979 test_end 00:05:46.979 00:05:46.979 real 0m1.380s 00:05:46.979 user 0m1.185s 00:05:46.979 sys 0m0.086s 00:05:46.979 ************************************ 00:05:46.979 END TEST event_reactor 00:05:46.979 ************************************ 00:05:46.979 20:46:57 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.979 20:46:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:46.979 20:46:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:46.979 20:46:57 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:46.979 20:46:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.979 20:46:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.979 ************************************ 00:05:46.979 START TEST event_reactor_perf 00:05:46.979 ************************************ 00:05:46.979 20:46:57 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:46.979 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:46.979 [2024-08-11 20:46:57.515179] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:05:46.979 [2024-08-11 20:46:57.515442] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68836 ] 00:05:46.979 [2024-08-11 20:46:57.654787] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.979 [2024-08-11 20:46:57.733111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.354 test_start 00:05:48.354 test_end 00:05:48.354 Performance: 349852 events per second 00:05:48.354 00:05:48.354 real 0m1.316s 00:05:48.354 user 0m1.147s 00:05:48.354 sys 0m0.057s 00:05:48.354 20:46:58 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.354 ************************************ 00:05:48.354 END TEST event_reactor_perf 00:05:48.354 ************************************ 00:05:48.354 20:46:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.354 20:46:58 event -- event/event.sh@49 -- # uname -s 00:05:48.354 20:46:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:48.354 20:46:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:48.354 20:46:58 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.354 20:46:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.354 20:46:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.354 ************************************ 00:05:48.354 START TEST event_scheduler 00:05:48.354 ************************************ 00:05:48.354 20:46:58 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:48.354 * Looking for test storage... 00:05:48.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:48.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.354 20:46:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:48.354 20:46:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=68892 00:05:48.354 20:46:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.354 20:46:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 68892 00:05:48.354 20:46:58 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 68892 ']' 00:05:48.354 20:46:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:48.354 20:46:58 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.354 20:46:58 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.354 20:46:58 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
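Editor's note: the scheduler test starting here launches its app paused with --wait-for-rpc, switches the scheduler over RPC, and only then finishes framework init (the framework_set_scheduler and framework_start_init calls appear further down in the trace). A hedged sketch of that start-up sequence with scripts/rpc.py is shown below; the readiness poll on spdk_get_version is a simplified stand-in for the waitforlisten helper, and the default /var/tmp/spdk.sock socket is assumed.

# Start the scheduler test app paused at the RPC stage, pick the dynamic
# scheduler, then let framework initialization continue.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR"/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
app_pid=$!   # kept for killprocess-style cleanup later

# Wait until the app answers on its RPC socket before configuring it.
until "$SPDK_DIR"/scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
done

"$SPDK_DIR"/scripts/rpc.py framework_set_scheduler dynamic
"$SPDK_DIR"/scripts/rpc.py framework_start_init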
00:05:48.354 20:46:58 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.354 20:46:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.354 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:48.354 [2024-08-11 20:46:59.012839] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:05:48.354 [2024-08-11 20:46:59.012947] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68892 ] 00:05:48.613 [2024-08-11 20:46:59.156430] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.613 [2024-08-11 20:46:59.271474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.613 [2024-08-11 20:46:59.271533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.613 [2024-08-11 20:46:59.271652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.613 [2024-08-11 20:46:59.271652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.613 20:46:59 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:48.613 20:46:59 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:48.613 20:46:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:48.613 20:46:59 event.event_scheduler -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.613 20:46:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.613 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.613 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.613 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.613 POWER: Cannot set governor of lcore 0 to performance 00:05:48.613 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:48.613 POWER: Cannot set governor of lcore 0 to userspace 00:05:48.613 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:48.613 POWER: Unable to set Power Management Environment for lcore 0 00:05:48.613 [2024-08-11 20:46:59.326952] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:48.613 [2024-08-11 20:46:59.327111] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:48.613 [2024-08-11 20:46:59.327217] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:48.613 [2024-08-11 20:46:59.327443] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:48.613 [2024-08-11 20:46:59.327563] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:48.613 [2024-08-11 20:46:59.327684] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:48.613 20:46:59 event.event_scheduler -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.613 20:46:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:48.613 20:46:59 event.event_scheduler -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.613 20:46:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.871 [2024-08-11 20:46:59.395344] sock.c: 
25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.871 [2024-08-11 20:46:59.432708] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:48.871 20:46:59 event.event_scheduler -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.871 20:46:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:48.871 20:46:59 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.871 20:46:59 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.871 20:46:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.871 ************************************ 00:05:48.871 START TEST scheduler_create_thread 00:05:48.871 ************************************ 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.871 2 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.871 3 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.871 4 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.871 5 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.871 20:46:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.871 6 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.871 7 00:05:48.871 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.872 8 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.872 9 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.872 10 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:48.872 20:46:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.438 20:47:00 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:49.438 20:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:49.438 20:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:49.438 20:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.812 20:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:50.812 20:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:50.812 20:47:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:50.812 20:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@557 -- # xtrace_disable 00:05:50.812 20:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.186 ************************************ 00:05:52.186 END TEST scheduler_create_thread 00:05:52.186 ************************************ 00:05:52.186 20:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:05:52.186 00:05:52.186 real 0m3.095s 00:05:52.186 user 0m0.017s 00:05:52.186 sys 0m0.009s 00:05:52.186 20:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.186 20:47:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.186 20:47:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:52.186 20:47:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 68892 00:05:52.186 20:47:02 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 68892 ']' 00:05:52.186 20:47:02 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 68892 00:05:52.186 20:47:02 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:52.186 20:47:02 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:52.186 20:47:02 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68892 00:05:52.186 killing process with pid 68892 00:05:52.186 20:47:02 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:52.186 20:47:02 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:52.186 20:47:02 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68892' 00:05:52.186 20:47:02 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 68892 00:05:52.186 20:47:02 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 68892 00:05:52.186 [2024-08-11 20:47:02.921204] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
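Editor's note: the scheduler_create_thread test above drives the running scheduler app through plugin RPCs: scheduler_thread_create returns a thread id, scheduler_thread_set_active retunes it, and scheduler_thread_delete removes it. A condensed sketch of that lifecycle with rpc.py follows; it assumes the scheduler test app is still listening on the default socket and that scheduler_plugin (which ships with the test app) is importable by rpc.py, as the test arranges.

# Thread lifecycle mirroring the RPCs in the trace: create an active thread
# pinned to core 0, drop it to 50% active, then delete it.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
thread_id=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)
$RPC scheduler_thread_set_active "$thread_id" 50
$RPC scheduler_thread_delete "$thread_id"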
00:05:52.445 00:05:52.445 real 0m4.281s 00:05:52.445 user 0m6.795s 00:05:52.445 sys 0m0.381s 00:05:52.445 20:47:03 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.445 20:47:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.445 ************************************ 00:05:52.445 END TEST event_scheduler 00:05:52.445 ************************************ 00:05:52.445 20:47:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:52.445 20:47:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:52.445 20:47:03 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.445 20:47:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.445 20:47:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.445 ************************************ 00:05:52.445 START TEST app_repeat 00:05:52.445 ************************************ 00:05:52.445 20:47:03 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:52.445 20:47:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.445 20:47:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.445 20:47:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:52.445 20:47:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.445 20:47:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:52.445 20:47:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:52.445 20:47:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:52.702 Process app_repeat pid: 68988 00:05:52.702 spdk_app_start Round 0 00:05:52.702 20:47:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=68988 00:05:52.702 20:47:03 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:52.702 20:47:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.702 20:47:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68988' 00:05:52.702 20:47:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.702 20:47:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:52.702 20:47:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 68988 /var/tmp/spdk-nbd.sock 00:05:52.702 20:47:03 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 68988 ']' 00:05:52.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.702 20:47:03 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.702 20:47:03 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:52.702 20:47:03 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.702 20:47:03 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:52.702 20:47:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.702 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:05:52.702 [2024-08-11 20:47:03.248528] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:05:52.702 [2024-08-11 20:47:03.248636] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68988 ] 00:05:52.702 [2024-08-11 20:47:03.387317] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.702 [2024-08-11 20:47:03.467154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.702 [2024-08-11 20:47:03.467162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.960 [2024-08-11 20:47:03.522046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.960 20:47:03 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:52.960 20:47:03 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:52.960 20:47:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.219 Malloc0 00:05:53.219 20:47:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.478 Malloc1 00:05:53.478 20:47:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.478 20:47:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.045 /dev/nbd0 00:05:54.045 20:47:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.045 20:47:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:54.045 20:47:04 
event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.045 1+0 records in 00:05:54.045 1+0 records out 00:05:54.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375426 s, 10.9 MB/s 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:54.045 20:47:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:54.045 20:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.045 20:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.045 20:47:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.304 /dev/nbd1 00:05:54.304 20:47:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.304 20:47:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.304 20:47:04 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.305 1+0 records in 00:05:54.305 1+0 records out 00:05:54.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700922 s, 5.8 MB/s 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:54.305 20:47:04 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:54.305 20:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.305 20:47:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.305 20:47:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:54.305 20:47:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.305 20:47:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.563 20:47:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.563 { 00:05:54.563 "nbd_device": "/dev/nbd0", 00:05:54.563 "bdev_name": "Malloc0" 00:05:54.563 }, 00:05:54.563 { 00:05:54.564 "nbd_device": "/dev/nbd1", 00:05:54.564 "bdev_name": "Malloc1" 00:05:54.564 } 00:05:54.564 ]' 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.564 { 00:05:54.564 "nbd_device": "/dev/nbd0", 00:05:54.564 "bdev_name": "Malloc0" 00:05:54.564 }, 00:05:54.564 { 00:05:54.564 "nbd_device": "/dev/nbd1", 00:05:54.564 "bdev_name": "Malloc1" 00:05:54.564 } 00:05:54.564 ]' 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.564 /dev/nbd1' 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.564 /dev/nbd1' 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.564 256+0 records in 00:05:54.564 256+0 records out 00:05:54.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00700196 s, 150 MB/s 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.564 256+0 records in 00:05:54.564 256+0 records out 00:05:54.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253849 s, 41.3 MB/s 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.564 256+0 records in 00:05:54.564 256+0 records out 00:05:54.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028597 s, 36.7 MB/s 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.564 20:47:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.132 20:47:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.132 20:47:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.132 20:47:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.132 20:47:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.132 20:47:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.132 20:47:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.132 20:47:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.132 20:47:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.132 20:47:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.132 20:47:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.391 20:47:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.391 20:47:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.391 20:47:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.391 20:47:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.391 20:47:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.391 20:47:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.391 20:47:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.391 20:47:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.391 20:47:05 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.391 20:47:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.391 20:47:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.649 20:47:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.650 20:47:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.908 20:47:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.167 [2024-08-11 20:47:06.802801] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.167 [2024-08-11 20:47:06.872641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.167 [2024-08-11 20:47:06.872646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.167 [2024-08-11 20:47:06.926662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.167 [2024-08-11 20:47:06.926760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.167 [2024-08-11 20:47:06.926773] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.521 spdk_app_start Round 1 00:05:59.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.521 20:47:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.521 20:47:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.521 20:47:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 68988 /var/tmp/spdk-nbd.sock 00:05:59.521 20:47:09 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 68988 ']' 00:05:59.521 20:47:09 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.521 20:47:09 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.521 20:47:09 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
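Editor's note: Round 0 above exercises the full malloc-bdev-over-NBD data path: two malloc bdevs are created, exported as /dev/nbd0 and /dev/nbd1, written with the same random 1 MiB, verified with cmp, and torn down. A condensed sketch of that flow against the app's /var/tmp/spdk-nbd.sock socket is shown below; temp-file handling is simplified relative to nbd_common.sh, and it assumes root privileges with the nbd kernel module loaded.

# One app_repeat-style pass: expose two 64 MiB malloc bdevs as NBD devices,
# write identical random data through each, and verify it reads back intact.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096          # 64 MiB bdev, 4096-byte blocks ("Malloc0" in this run)
$RPC bdev_malloc_create 64 4096          # second bdev ("Malloc1" in this run)
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" "$nbd"           # fails loudly if the data differs
done
rm -f "$tmp"

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1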
00:05:59.521 20:47:09 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.521 20:47:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.521 20:47:09 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.521 20:47:09 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:59.521 20:47:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.521 Malloc0 00:05:59.780 20:47:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.038 Malloc1 00:06:00.038 20:47:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.038 20:47:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.298 /dev/nbd0 00:06:00.298 20:47:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.298 20:47:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.298 1+0 records in 00:06:00.298 1+0 records out 
00:06:00.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535439 s, 7.6 MB/s 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:00.298 20:47:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:00.298 20:47:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.298 20:47:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.298 20:47:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.557 /dev/nbd1 00:06:00.557 20:47:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.557 20:47:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.557 1+0 records in 00:06:00.557 1+0 records out 00:06:00.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028697 s, 14.3 MB/s 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:00.557 20:47:11 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:00.557 20:47:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.557 20:47:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.557 20:47:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.557 20:47:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.557 20:47:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.815 { 00:06:00.815 "nbd_device": "/dev/nbd0", 00:06:00.815 "bdev_name": "Malloc0" 00:06:00.815 }, 00:06:00.815 { 00:06:00.815 "nbd_device": "/dev/nbd1", 00:06:00.815 "bdev_name": "Malloc1" 00:06:00.815 } 
00:06:00.815 ]' 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.815 { 00:06:00.815 "nbd_device": "/dev/nbd0", 00:06:00.815 "bdev_name": "Malloc0" 00:06:00.815 }, 00:06:00.815 { 00:06:00.815 "nbd_device": "/dev/nbd1", 00:06:00.815 "bdev_name": "Malloc1" 00:06:00.815 } 00:06:00.815 ]' 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.815 /dev/nbd1' 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.815 /dev/nbd1' 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.815 256+0 records in 00:06:00.815 256+0 records out 00:06:00.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00664721 s, 158 MB/s 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.815 256+0 records in 00:06:00.815 256+0 records out 00:06:00.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228291 s, 45.9 MB/s 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.815 256+0 records in 00:06:00.815 256+0 records out 00:06:00.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262339 s, 40.0 MB/s 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.815 20:47:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.815 20:47:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.073 20:47:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.332 20:47:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.332 20:47:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.332 20:47:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.332 20:47:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.332 20:47:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.332 20:47:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.332 20:47:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.332 20:47:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.332 20:47:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.332 20:47:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.898 20:47:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.898 20:47:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.156 20:47:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.414 [2024-08-11 20:47:13.039976] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.414 [2024-08-11 20:47:13.108621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.414 [2024-08-11 20:47:13.108635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.673 [2024-08-11 20:47:13.193448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.673 [2024-08-11 20:47:13.193584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.673 [2024-08-11 20:47:13.193597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.205 spdk_app_start Round 2 00:06:05.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.205 20:47:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.205 20:47:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:05.205 20:47:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 68988 /var/tmp/spdk-nbd.sock 00:06:05.205 20:47:15 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 68988 ']' 00:06:05.205 20:47:15 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.205 20:47:15 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.205 20:47:15 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
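Note on the flow traced above: each app_repeat round drives nbd_rpc_data_verify end to end — create malloc bdevs over the RPC socket, expose them as kernel NBD block devices, write a random pattern through the device, and compare it back before tearing the disks down. A minimal by-hand sketch of that round trip follows; the temp-file path is illustrative and the harness's waitfornbd/waitfornbd_exit retry helpers are omitted.

    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s $sock bdev_malloc_create 64 4096                  # 64 MB malloc bdev, 4096-byte blocks -> Malloc0
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0            # expose the bdev as /dev/nbd0
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256  # 1 MiB random pattern (illustrative path)
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                   # verify the round trip byte for byte
    $rpc -s $sock nbd_stop_disk /dev/nbd0
    rm -f /tmp/nbdrandtest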
00:06:05.205 20:47:15 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.205 20:47:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.463 20:47:16 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.463 20:47:16 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:05.463 20:47:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.722 Malloc0 00:06:05.722 20:47:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.980 Malloc1 00:06:05.980 20:47:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.980 20:47:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.238 /dev/nbd0 00:06:06.238 20:47:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.238 20:47:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.238 1+0 records in 00:06:06.238 1+0 records out 
00:06:06.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237034 s, 17.3 MB/s 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:06.238 20:47:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:06.238 20:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.238 20:47:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.238 20:47:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.497 /dev/nbd1 00:06:06.497 20:47:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.497 20:47:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.497 1+0 records in 00:06:06.497 1+0 records out 00:06:06.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286237 s, 14.3 MB/s 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:06.497 20:47:17 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:06.497 20:47:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.497 20:47:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.497 20:47:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.497 20:47:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.497 20:47:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.756 { 00:06:06.756 "nbd_device": "/dev/nbd0", 00:06:06.756 "bdev_name": "Malloc0" 00:06:06.756 }, 00:06:06.756 { 00:06:06.756 "nbd_device": "/dev/nbd1", 00:06:06.756 "bdev_name": "Malloc1" 00:06:06.756 } 
00:06:06.756 ]' 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.756 { 00:06:06.756 "nbd_device": "/dev/nbd0", 00:06:06.756 "bdev_name": "Malloc0" 00:06:06.756 }, 00:06:06.756 { 00:06:06.756 "nbd_device": "/dev/nbd1", 00:06:06.756 "bdev_name": "Malloc1" 00:06:06.756 } 00:06:06.756 ]' 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.756 /dev/nbd1' 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.756 /dev/nbd1' 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.756 256+0 records in 00:06:06.756 256+0 records out 00:06:06.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106339 s, 98.6 MB/s 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.756 256+0 records in 00:06:06.756 256+0 records out 00:06:06.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202944 s, 51.7 MB/s 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.756 20:47:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.015 256+0 records in 00:06:07.015 256+0 records out 00:06:07.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206531 s, 50.8 MB/s 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.015 20:47:17 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.015 20:47:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.274 20:47:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.274 20:47:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.274 20:47:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.274 20:47:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.274 20:47:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.274 20:47:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.274 20:47:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.274 20:47:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.274 20:47:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.274 20:47:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.533 20:47:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.791 20:47:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.791 20:47:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.049 20:47:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.308 [2024-08-11 20:47:18.922499] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.308 [2024-08-11 20:47:18.973148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.308 [2024-08-11 20:47:18.973160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.308 [2024-08-11 20:47:19.028871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.308 [2024-08-11 20:47:19.029023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.308 [2024-08-11 20:47:19.029037] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.595 20:47:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 68988 /var/tmp/spdk-nbd.sock 00:06:11.595 20:47:21 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 68988 ']' 00:06:11.595 20:47:21 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.595 20:47:21 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.595 20:47:21 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
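The round-to-round transition above is an RPC-driven restart: the test asks the running app to terminate itself with spdk_kill_instance SIGTERM, sleeps, and then waits until the Unix socket is listening again before the next round begins. A simplified stand-in for that cycle (the real waitforlisten helper bounds its retries and does more thorough checking than a socket-path test):

    sock=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock spdk_kill_instance SIGTERM
    sleep 3
    # wait for the next app iteration to come back up and listen on the socket
    until [ -S "$sock" ]; do
        sleep 0.5
    done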
00:06:11.595 20:47:21 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.595 20:47:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:11.595 20:47:22 event.app_repeat -- event/event.sh@39 -- # killprocess 68988 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 68988 ']' 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 68988 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68988 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68988' 00:06:11.595 killing process with pid 68988 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@965 -- # kill 68988 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@970 -- # wait 68988 00:06:11.595 spdk_app_start is called in Round 0. 00:06:11.595 Shutdown signal received, stop current app iteration 00:06:11.595 Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 reinitialization... 00:06:11.595 spdk_app_start is called in Round 1. 00:06:11.595 Shutdown signal received, stop current app iteration 00:06:11.595 Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 reinitialization... 00:06:11.595 spdk_app_start is called in Round 2. 00:06:11.595 Shutdown signal received, stop current app iteration 00:06:11.595 Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 reinitialization... 00:06:11.595 spdk_app_start is called in Round 3. 00:06:11.595 Shutdown signal received, stop current app iteration 00:06:11.595 20:47:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:11.595 20:47:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:11.595 00:06:11.595 real 0m19.025s 00:06:11.595 user 0m43.201s 00:06:11.595 sys 0m2.969s 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.595 20:47:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.595 ************************************ 00:06:11.595 END TEST app_repeat 00:06:11.595 ************************************ 00:06:11.595 20:47:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:11.595 20:47:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:11.595 20:47:22 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.595 20:47:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.595 20:47:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.595 ************************************ 00:06:11.595 START TEST cpu_locks 00:06:11.595 ************************************ 00:06:11.595 20:47:22 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:11.853 * Looking for test storage... 
00:06:11.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:11.853 20:47:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:11.853 20:47:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:11.853 20:47:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:11.853 20:47:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:11.853 20:47:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.853 20:47:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.853 20:47:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.853 ************************************ 00:06:11.853 START TEST default_locks 00:06:11.853 ************************************ 00:06:11.853 20:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:11.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.853 20:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69420 00:06:11.853 20:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 69420 00:06:11.853 20:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.853 20:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 69420 ']' 00:06:11.853 20:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.853 20:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.853 20:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.853 20:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.853 20:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.853 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:11.853 [2024-08-11 20:47:22.458540] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:11.853 [2024-08-11 20:47:22.458822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69420 ] 00:06:11.853 [2024-08-11 20:47:22.595637] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.111 [2024-08-11 20:47:22.657753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.111 [2024-08-11 20:47:22.709378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.677 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:12.677 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:12.677 20:47:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 69420 00:06:12.677 20:47:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 69420 00:06:12.677 20:47:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 69420 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 69420 ']' 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 69420 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69420 00:06:12.935 killing process with pid 69420 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69420' 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 69420 00:06:12.935 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 69420 00:06:13.501 20:47:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69420 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@646 -- # local es=0 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 69420 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # waitforlisten 69420 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 69420 ']' 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.502 
20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.502 ERROR: process (pid: 69420) is no longer running 00:06:13.502 20:47:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.502 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (69420) - No such process 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # es=1 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:13.502 00:06:13.502 real 0m1.608s 00:06:13.502 user 0m1.649s 00:06:13.502 sys 0m0.491s 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.502 20:47:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.502 ************************************ 00:06:13.502 END TEST default_locks 00:06:13.502 ************************************ 00:06:13.502 20:47:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:13.502 20:47:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.502 20:47:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.502 20:47:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.502 ************************************ 00:06:13.502 START TEST default_locks_via_rpc 00:06:13.502 ************************************ 00:06:13.502 20:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:13.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.502 20:47:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69472 00:06:13.502 20:47:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 69472 00:06:13.502 20:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69472 ']' 00:06:13.502 20:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.502 20:47:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.502 20:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:13.502 20:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.502 20:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:13.502 20:47:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.502 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:13.502 [2024-08-11 20:47:24.108444] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:13.502 [2024-08-11 20:47:24.108535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69472 ] 00:06:13.502 [2024-08-11 20:47:24.239929] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.760 [2024-08-11 20:47:24.302275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.760 [2024-08-11 20:47:24.352944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:14.326 
20:47:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 69472 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 69472 00:06:14.326 20:47:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 69472 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 69472 ']' 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 69472 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69472 00:06:14.892 killing process with pid 69472 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69472' 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 69472 00:06:14.892 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 69472 00:06:15.459 ************************************ 00:06:15.459 END TEST default_locks_via_rpc 00:06:15.459 ************************************ 00:06:15.459 00:06:15.459 real 0m1.930s 00:06:15.459 user 0m2.060s 00:06:15.459 sys 0m0.543s 00:06:15.459 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.459 20:47:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.459 20:47:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:15.459 20:47:26 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:15.459 20:47:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.459 20:47:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.459 ************************************ 00:06:15.459 START TEST non_locking_app_on_locked_coremask 00:06:15.459 ************************************ 00:06:15.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
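The lock checks in the cpu_locks tests above reduce to inspecting the file locks held by the target process: a target started with -m 0x1 claims a per-core lock that shows up in lslocks output as spdk_cpu_lock, the --disable-cpumask-locks start-up flag skips claiming it, and the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs drop and re-take it at runtime. A rough manual check, with the start-up wait simplified (the tests wait for the RPC socket before probing):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1                                        # simplified; the harness uses waitforlisten
    lslocks -p "$pid" | grep spdk_cpu_lock         # core 0 lock should be present
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -c spdk_cpu_lock      # expect 0 matches now
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    kill "$pid"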
00:06:15.459 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:15.459 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69518 00:06:15.459 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 69518 /var/tmp/spdk.sock 00:06:15.459 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.459 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69518 ']' 00:06:15.459 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.459 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.459 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.459 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.459 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.459 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:15.459 [2024-08-11 20:47:26.118306] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:15.459 [2024-08-11 20:47:26.118569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69518 ] 00:06:15.717 [2024-08-11 20:47:26.257859] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.717 [2024-08-11 20:47:26.342245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.717 [2024-08-11 20:47:26.400658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69526 00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 69526 /var/tmp/spdk2.sock 00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69526 ']' 00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:15.975 20:47:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.975 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:15.975 [2024-08-11 20:47:26.677725] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:15.975 [2024-08-11 20:47:26.677972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69526 ] 00:06:16.233 [2024-08-11 20:47:26.824414] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.233 [2024-08-11 20:47:26.824450] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.233 [2024-08-11 20:47:27.005317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.491 [2024-08-11 20:47:27.123291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.058 20:47:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.058 20:47:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:17.058 20:47:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 69518 00:06:17.058 20:47:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69518 00:06:17.058 20:47:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.992 20:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 69518 00:06:17.992 20:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69518 ']' 00:06:17.992 20:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 69518 00:06:17.992 20:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:17.992 20:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.992 20:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69518 00:06:17.992 20:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.992 20:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.992 killing process with pid 69518 00:06:17.992 20:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69518' 00:06:17.992 20:47:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 69518 00:06:17.992 20:47:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 69518 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 69526 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69526 ']' 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 69526 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69526 00:06:18.927 killing process with pid 69526 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69526' 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 69526 00:06:18.927 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 69526 00:06:19.186 00:06:19.186 real 0m3.801s 00:06:19.186 user 0m4.124s 00:06:19.186 sys 0m1.197s 00:06:19.186 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.186 20:47:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 ************************************ 00:06:19.186 END TEST non_locking_app_on_locked_coremask 00:06:19.186 ************************************ 00:06:19.186 20:47:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:19.186 20:47:29 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.186 20:47:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.186 20:47:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 ************************************ 00:06:19.186 START TEST locking_app_on_unlocked_coremask 00:06:19.186 ************************************ 00:06:19.186 20:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:19.186 20:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=69593 00:06:19.186 20:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:19.186 20:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 69593 /var/tmp/spdk.sock 00:06:19.186 20:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69593 ']' 00:06:19.186 20:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.186 20:47:29 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:19.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.186 20:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.186 20:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:19.186 20:47:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.186 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:19.444 [2024-08-11 20:47:29.964460] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:19.444 [2024-08-11 20:47:29.964550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69593 ] 00:06:19.444 [2024-08-11 20:47:30.105794] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:19.444 [2024-08-11 20:47:30.105844] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.444 [2024-08-11 20:47:30.194857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.703 [2024-08-11 20:47:30.256467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69609 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 69609 /var/tmp/spdk2.sock 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69609 ']' 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:20.270 20:47:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.270 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:20.270 [2024-08-11 20:47:31.037457] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:20.270 [2024-08-11 20:47:31.037715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69609 ] 00:06:20.529 [2024-08-11 20:47:31.171540] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.788 [2024-08-11 20:47:31.308462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.788 [2024-08-11 20:47:31.414392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.354 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:21.354 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:21.354 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 69609 00:06:21.354 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.354 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69609 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 69593 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69593 ']' 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 69593 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69593 00:06:22.290 killing process with pid 69593 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69593' 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 69593 00:06:22.290 20:47:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 69593 00:06:23.252 20:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 69609 00:06:23.252 20:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69609 ']' 00:06:23.252 20:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 69609 00:06:23.252 20:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:23.253 20:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:23.253 20:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69609 00:06:23.253 killing process with pid 69609 00:06:23.253 20:47:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:23.253 20:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:23.253 20:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69609' 00:06:23.253 20:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 69609 00:06:23.253 20:47:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 69609 00:06:23.556 00:06:23.556 real 0m4.147s 00:06:23.556 user 0m4.663s 00:06:23.556 sys 0m1.146s 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.556 ************************************ 00:06:23.556 END TEST locking_app_on_unlocked_coremask 00:06:23.556 ************************************ 00:06:23.556 20:47:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.556 20:47:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.556 20:47:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.556 20:47:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.556 ************************************ 00:06:23.556 START TEST locking_app_on_locked_coremask 00:06:23.556 ************************************ 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69676 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 69676 /var/tmp/spdk.sock 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69676 ']' 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:23.556 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.556 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:23.556 [2024-08-11 20:47:34.163519] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:23.556 [2024-08-11 20:47:34.163642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69676 ] 00:06:23.556 [2024-08-11 20:47:34.302384] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.815 [2024-08-11 20:47:34.372364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.815 [2024-08-11 20:47:34.424866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69690 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69690 /var/tmp/spdk2.sock 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@646 -- # local es=0 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 69690 /var/tmp/spdk2.sock 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # waitforlisten 69690 /var/tmp/spdk2.sock 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 69690 ']' 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.074 20:47:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.074 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:24.074 [2024-08-11 20:47:34.678261] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:24.074 [2024-08-11 20:47:34.678498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69690 ] 00:06:24.074 [2024-08-11 20:47:34.825165] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69676 has claimed it. 00:06:24.074 [2024-08-11 20:47:34.825239] app.c: 903:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.011 ERROR: process (pid: 69690) is no longer running 00:06:25.011 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (69690) - No such process 00:06:25.011 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.011 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:25.011 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # es=1 00:06:25.011 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:25.011 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:25.011 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:25.011 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 69676 00:06:25.011 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 69676 00:06:25.011 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 69676 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 69676 ']' 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 69676 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69676 00:06:25.270 killing process with pid 69676 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69676' 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 69676 00:06:25.270 20:47:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 69676 00:06:25.528 ************************************ 00:06:25.528 END TEST locking_app_on_locked_coremask 00:06:25.528 ************************************ 00:06:25.528 00:06:25.528 real 0m2.136s 00:06:25.528 user 0m2.399s 00:06:25.528 sys 0m0.611s 00:06:25.528 20:47:36 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.528 20:47:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.528 20:47:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:25.528 20:47:36 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.528 20:47:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.528 20:47:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.528 ************************************ 00:06:25.528 START TEST locking_overlapped_coremask 00:06:25.528 ************************************ 00:06:25.528 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:25.528 20:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69736 00:06:25.528 20:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 69736 /var/tmp/spdk.sock 00:06:25.528 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 69736 ']' 00:06:25.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.528 20:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:25.528 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.528 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.528 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.529 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.529 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.788 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:25.788 [2024-08-11 20:47:36.343510] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:25.788 [2024-08-11 20:47:36.343586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69736 ] 00:06:25.788 [2024-08-11 20:47:36.479791] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.047 [2024-08-11 20:47:36.581379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.047 [2024-08-11 20:47:36.581510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.047 [2024-08-11 20:47:36.581519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.047 [2024-08-11 20:47:36.641250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69746 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69746 /var/tmp/spdk2.sock 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@646 -- # local es=0 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # valid_exec_arg waitforlisten 69746 /var/tmp/spdk2.sock 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # local arg=waitforlisten 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # type -t waitforlisten 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # waitforlisten 69746 /var/tmp/spdk2.sock 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 69746 ']' 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:26.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:26.306 20:47:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.306 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:26.306 [2024-08-11 20:47:36.909991] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:26.306 [2024-08-11 20:47:36.910086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69746 ] 00:06:26.306 [2024-08-11 20:47:37.051186] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69736 has claimed it. 00:06:26.306 [2024-08-11 20:47:37.051252] app.c: 903:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.243 ERROR: process (pid: 69746) is no longer running 00:06:27.243 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (69746) - No such process 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # es=1 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 69736 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 69736 ']' 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 69736 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69736 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69736' 00:06:27.243 killing process with pid 69736 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 69736 00:06:27.243 20:47:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 69736 00:06:27.501 00:06:27.501 real 0m1.939s 00:06:27.501 user 0m5.195s 00:06:27.501 sys 0m0.432s 00:06:27.501 20:47:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.501 20:47:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.501 ************************************ 00:06:27.501 END TEST locking_overlapped_coremask 00:06:27.501 ************************************ 00:06:27.501 20:47:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.501 20:47:38 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.501 20:47:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.501 20:47:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.501 ************************************ 00:06:27.501 START TEST locking_overlapped_coremask_via_rpc 00:06:27.501 ************************************ 00:06:27.501 20:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:27.501 20:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69792 00:06:27.501 20:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 69792 /var/tmp/spdk.sock 00:06:27.501 20:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69792 ']' 00:06:27.501 20:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.501 20:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.501 20:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.759 20:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.759 20:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.759 20:47:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.759 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:27.759 [2024-08-11 20:47:38.340934] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:27.759 [2024-08-11 20:47:38.341048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69792 ] 00:06:27.759 [2024-08-11 20:47:38.477327] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:27.759 [2024-08-11 20:47:38.477388] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.016 [2024-08-11 20:47:38.557733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.016 [2024-08-11 20:47:38.557827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.016 [2024-08-11 20:47:38.557847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.016 [2024-08-11 20:47:38.615157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.582 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.582 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:28.582 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69810 00:06:28.841 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.841 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 69810 /var/tmp/spdk2.sock 00:06:28.841 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69810 ']' 00:06:28.841 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.841 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.841 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.841 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.841 20:47:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.841 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:28.841 [2024-08-11 20:47:39.405398] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:28.841 [2024-08-11 20:47:39.405484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69810 ] 00:06:28.841 [2024-08-11 20:47:39.541516] app.c: 907:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.841 [2024-08-11 20:47:39.541571] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.100 [2024-08-11 20:47:39.674844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.100 [2024-08-11 20:47:39.674995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.100 [2024-08-11 20:47:39.675004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:29.100 [2024-08-11 20:47:39.787836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.667 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.667 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:29.667 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.667 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:29.667 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.667 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:29.667 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.667 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@646 -- # local es=0 00:06:29.667 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.668 [2024-08-11 20:47:40.366854] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69792 has claimed it. 00:06:29.668 request: 00:06:29.668 { 00:06:29.668 "method": "framework_enable_cpumask_locks", 00:06:29.668 "req_id": 1 00:06:29.668 } 00:06:29.668 Got JSON-RPC error response 00:06:29.668 response: 00:06:29.668 { 00:06:29.668 "code": -32603, 00:06:29.668 "message": "Failed to claim CPU core: 2" 00:06:29.668 } 00:06:29.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # es=1 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 69792 /var/tmp/spdk.sock 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69792 ']' 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.668 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.927 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:29.927 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:29.927 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 69810 /var/tmp/spdk2.sock 00:06:29.927 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 69810 ']' 00:06:29.927 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.927 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.927 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:29.927 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.927 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.186 ************************************ 00:06:30.186 END TEST locking_overlapped_coremask_via_rpc 00:06:30.186 ************************************ 00:06:30.186 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.186 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:30.186 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.186 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.186 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.186 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.186 00:06:30.186 real 0m2.576s 00:06:30.186 user 0m1.336s 00:06:30.186 sys 0m0.176s 00:06:30.186 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.186 20:47:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.186 20:47:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.186 20:47:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 69792 ]] 00:06:30.186 20:47:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 69792 00:06:30.186 20:47:40 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 69792 ']' 00:06:30.186 20:47:40 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 69792 00:06:30.186 20:47:40 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:30.186 20:47:40 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:30.186 20:47:40 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69792 00:06:30.186 killing process with pid 69792 00:06:30.186 20:47:40 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:30.186 20:47:40 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:30.186 20:47:40 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69792' 00:06:30.186 20:47:40 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 69792 00:06:30.186 20:47:40 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 69792 00:06:30.753 20:47:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 69810 ]] 00:06:30.753 20:47:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 69810 00:06:30.753 20:47:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 69810 ']' 00:06:30.753 20:47:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 69810 00:06:30.753 20:47:41 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:30.753 20:47:41 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:30.753 
20:47:41 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69810 00:06:30.753 killing process with pid 69810 00:06:30.753 20:47:41 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:30.753 20:47:41 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:30.753 20:47:41 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69810' 00:06:30.753 20:47:41 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 69810 00:06:30.753 20:47:41 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 69810 00:06:31.011 20:47:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.011 Process with pid 69792 is not found 00:06:31.011 20:47:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:31.011 20:47:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 69792 ]] 00:06:31.011 20:47:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 69792 00:06:31.011 20:47:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 69792 ']' 00:06:31.011 20:47:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 69792 00:06:31.011 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (69792) - No such process 00:06:31.011 20:47:41 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 69792 is not found' 00:06:31.011 20:47:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 69810 ]] 00:06:31.011 20:47:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 69810 00:06:31.011 20:47:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 69810 ']' 00:06:31.011 20:47:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 69810 00:06:31.011 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (69810) - No such process 00:06:31.011 Process with pid 69810 is not found 00:06:31.011 20:47:41 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 69810 is not found' 00:06:31.011 20:47:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:31.011 00:06:31.011 real 0m19.411s 00:06:31.011 user 0m33.677s 00:06:31.011 sys 0m5.425s 00:06:31.011 ************************************ 00:06:31.011 END TEST cpu_locks 00:06:31.011 ************************************ 00:06:31.011 20:47:41 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.011 20:47:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.011 ************************************ 00:06:31.011 END TEST event 00:06:31.011 ************************************ 00:06:31.011 00:06:31.011 real 0m47.141s 00:06:31.011 user 1m30.253s 00:06:31.011 sys 0m9.247s 00:06:31.011 20:47:41 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.011 20:47:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.012 20:47:41 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.012 20:47:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.012 20:47:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.012 20:47:41 -- common/autotest_common.sh@10 -- # set +x 00:06:31.269 ************************************ 00:06:31.269 START TEST thread 00:06:31.269 ************************************ 00:06:31.269 20:47:41 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:31.269 * Looking for test storage... 
00:06:31.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:31.269 20:47:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.269 20:47:41 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:31.269 20:47:41 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.269 20:47:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.269 ************************************ 00:06:31.269 START TEST thread_poller_perf 00:06:31.269 ************************************ 00:06:31.269 20:47:41 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.269 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:31.269 [2024-08-11 20:47:41.894496] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:31.269 [2024-08-11 20:47:41.894816] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69932 ] 00:06:31.269 [2024-08-11 20:47:42.023209] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.528 [2024-08-11 20:47:42.085555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.528 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:32.462 ====================================== 00:06:32.462 busy:2206835485 (cyc) 00:06:32.462 total_run_count: 387000 00:06:32.462 tsc_hz: 2200000000 (cyc) 00:06:32.462 ====================================== 00:06:32.462 poller_cost: 5702 (cyc), 2591 (nsec) 00:06:32.462 00:06:32.462 real 0m1.270s 00:06:32.462 user 0m1.114s 00:06:32.462 sys 0m0.050s 00:06:32.462 20:47:43 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.462 20:47:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.462 ************************************ 00:06:32.462 END TEST thread_poller_perf 00:06:32.462 ************************************ 00:06:32.462 20:47:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.462 20:47:43 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:32.462 20:47:43 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.462 20:47:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.462 ************************************ 00:06:32.462 START TEST thread_poller_perf 00:06:32.462 ************************************ 00:06:32.462 20:47:43 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.462 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:32.462 [2024-08-11 20:47:43.223597] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:32.462 [2024-08-11 20:47:43.223677] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69962 ] 00:06:32.720 [2024-08-11 20:47:43.349868] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.720 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:32.720 [2024-08-11 20:47:43.409124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.094 ====================================== 00:06:34.094 busy:2202248220 (cyc) 00:06:34.094 total_run_count: 4950000 00:06:34.094 tsc_hz: 2200000000 (cyc) 00:06:34.094 ====================================== 00:06:34.094 poller_cost: 444 (cyc), 201 (nsec) 00:06:34.094 ************************************ 00:06:34.094 END TEST thread_poller_perf 00:06:34.094 ************************************ 00:06:34.094 00:06:34.094 real 0m1.281s 00:06:34.094 user 0m1.126s 00:06:34.094 sys 0m0.047s 00:06:34.094 20:47:44 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.094 20:47:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.094 20:47:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:34.094 ************************************ 00:06:34.094 END TEST thread 00:06:34.094 ************************************ 00:06:34.094 00:06:34.094 real 0m2.738s 00:06:34.094 user 0m2.307s 00:06:34.094 sys 0m0.212s 00:06:34.094 20:47:44 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.094 20:47:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.094 20:47:44 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:34.094 20:47:44 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:34.094 20:47:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.094 20:47:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.094 20:47:44 -- common/autotest_common.sh@10 -- # set +x 00:06:34.094 ************************************ 00:06:34.094 START TEST app_cmdline 00:06:34.094 ************************************ 00:06:34.094 20:47:44 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:34.094 * Looking for test storage... 00:06:34.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:34.094 20:47:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:34.094 20:47:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=70037 00:06:34.094 20:47:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 70037 00:06:34.094 20:47:44 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 70037 ']' 00:06:34.094 20:47:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:34.094 20:47:44 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.094 20:47:44 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:34.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.094 20:47:44 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:34.094 20:47:44 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:34.094 20:47:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.094 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:34.094 [2024-08-11 20:47:44.736315] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:34.095 [2024-08-11 20:47:44.736425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70037 ] 00:06:34.095 [2024-08-11 20:47:44.866738] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.351 [2024-08-11 20:47:44.931866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.351 [2024-08-11 20:47:44.986695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.609 20:47:45 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:34.609 20:47:45 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:34.609 20:47:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:34.913 { 00:06:34.913 "version": "SPDK v24.09-pre git sha1 227b8322c", 00:06:34.913 "fields": { 00:06:34.913 "major": 24, 00:06:34.913 "minor": 9, 00:06:34.913 "patch": 0, 00:06:34.913 "suffix": "-pre", 00:06:34.913 "commit": "227b8322c" 00:06:34.913 } 00:06:34.913 } 00:06:34.913 20:47:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:34.913 20:47:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:34.913 20:47:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:34.913 20:47:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:34.913 20:47:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:34.913 20:47:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@557 -- # xtrace_disable 00:06:34.913 20:47:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:06:34.913 20:47:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:34.913 20:47:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:34.913 20:47:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@646 -- # local es=0 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:34.913 20:47:45 app_cmdline -- 
common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:34.913 20:47:45 app_cmdline -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.176 request: 00:06:35.176 { 00:06:35.176 "method": "env_dpdk_get_mem_stats", 00:06:35.176 "req_id": 1 00:06:35.176 } 00:06:35.176 Got JSON-RPC error response 00:06:35.176 response: 00:06:35.176 { 00:06:35.176 "code": -32601, 00:06:35.176 "message": "Method not found" 00:06:35.176 } 00:06:35.176 20:47:45 app_cmdline -- common/autotest_common.sh@649 -- # es=1 00:06:35.176 20:47:45 app_cmdline -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:35.176 20:47:45 app_cmdline -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:06:35.176 20:47:45 app_cmdline -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:35.176 20:47:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 70037 00:06:35.176 20:47:45 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 70037 ']' 00:06:35.176 20:47:45 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 70037 00:06:35.176 20:47:45 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:35.176 20:47:45 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:35.176 20:47:45 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70037 00:06:35.177 killing process with pid 70037 00:06:35.177 20:47:45 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:35.177 20:47:45 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:35.177 20:47:45 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70037' 00:06:35.177 20:47:45 app_cmdline -- common/autotest_common.sh@965 -- # kill 70037 00:06:35.177 20:47:45 app_cmdline -- common/autotest_common.sh@970 -- # wait 70037 00:06:35.742 00:06:35.743 real 0m1.640s 00:06:35.743 user 0m2.022s 00:06:35.743 sys 0m0.451s 00:06:35.743 ************************************ 00:06:35.743 END TEST app_cmdline 00:06:35.743 ************************************ 00:06:35.743 20:47:46 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.743 20:47:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.743 20:47:46 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.743 20:47:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.743 20:47:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.743 20:47:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.743 ************************************ 00:06:35.743 START TEST version 00:06:35.743 ************************************ 00:06:35.743 20:47:46 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.743 * Looking for test storage... 
00:06:35.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:35.743 20:47:46 version -- app/version.sh@17 -- # get_header_version major 00:06:35.743 20:47:46 version -- app/version.sh@14 -- # cut -f2 00:06:35.743 20:47:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.743 20:47:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.743 20:47:46 version -- app/version.sh@17 -- # major=24 00:06:35.743 20:47:46 version -- app/version.sh@18 -- # get_header_version minor 00:06:35.743 20:47:46 version -- app/version.sh@14 -- # cut -f2 00:06:35.743 20:47:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.743 20:47:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.743 20:47:46 version -- app/version.sh@18 -- # minor=9 00:06:35.743 20:47:46 version -- app/version.sh@19 -- # get_header_version patch 00:06:35.743 20:47:46 version -- app/version.sh@14 -- # cut -f2 00:06:35.743 20:47:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.743 20:47:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.743 20:47:46 version -- app/version.sh@19 -- # patch=0 00:06:35.743 20:47:46 version -- app/version.sh@20 -- # get_header_version suffix 00:06:35.743 20:47:46 version -- app/version.sh@14 -- # cut -f2 00:06:35.743 20:47:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.743 20:47:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.743 20:47:46 version -- app/version.sh@20 -- # suffix=-pre 00:06:35.743 20:47:46 version -- app/version.sh@22 -- # version=24.9 00:06:35.743 20:47:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:35.743 20:47:46 version -- app/version.sh@28 -- # version=24.9rc0 00:06:35.743 20:47:46 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:35.743 20:47:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:35.743 20:47:46 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:35.743 20:47:46 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:35.743 00:06:35.743 real 0m0.163s 00:06:35.743 user 0m0.093s 00:06:35.743 sys 0m0.101s 00:06:35.743 20:47:46 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.743 20:47:46 version -- common/autotest_common.sh@10 -- # set +x 00:06:35.743 ************************************ 00:06:35.743 END TEST version 00:06:35.743 ************************************ 00:06:35.743 20:47:46 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:35.743 20:47:46 -- spdk/autotest.sh@201 -- # [[ 0 -eq 1 ]] 00:06:35.743 20:47:46 -- spdk/autotest.sh@207 -- # uname -s 00:06:35.743 20:47:46 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:06:35.743 20:47:46 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:06:35.743 20:47:46 -- spdk/autotest.sh@208 -- # [[ 1 -eq 1 ]] 00:06:35.743 20:47:46 -- spdk/autotest.sh@214 -- # [[ 0 -eq 0 ]] 00:06:35.743 20:47:46 -- spdk/autotest.sh@215 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:35.743 20:47:46 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:35.743 20:47:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:35.743 20:47:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.743 ************************************ 00:06:35.743 START TEST spdk_dd 00:06:35.743 ************************************ 00:06:35.743 20:47:46 spdk_dd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:36.001 * Looking for test storage... 00:06:36.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:36.001 20:47:46 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.001 20:47:46 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.001 20:47:46 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.001 20:47:46 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.001 20:47:46 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.001 20:47:46 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.001 20:47:46 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.001 20:47:46 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:36.001 20:47:46 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.001 20:47:46 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:36.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:36.259 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:36.259 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:36.259 20:47:47 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:36.259 20:47:47 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:36.259 20:47:47 spdk_dd -- 
scripts/common.sh@312 -- # [[ -n '' ]] 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:36.259 20:47:47 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:36.260 20:47:47 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:36.260 20:47:47 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:36.260 20:47:47 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:36.260 20:47:47 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:36.260 20:47:47 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:36.519 20:47:47 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:36.519 20:47:47 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:36.519 20:47:47 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:36.519 20:47:47 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:36.519 20:47:47 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:36.519 20:47:47 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:36.519 20:47:47 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:36.519 20:47:47 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:36.519 20:47:47 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 
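The xtrace above shows scripts/common.sh discovering NVMe controllers by PCI class code (class 01, subclass 08, progif 02) before handing them to the dd tests. As a rough standalone sketch of that same lspci pipeline — the helper name here is hypothetical and only lspci is assumed to be present — it could be written as:

# Hypothetical sketch, not part of the test run: enumerate NVMe controllers
# the way the trace above does, by matching PCI class code 0108 / progif 02.
list_nvme_bdfs() {
    local class subclass progif
    class=$(printf '%02x' 1)      # 01: mass storage controller
    subclass=$(printf '%02x' 8)   # 08: non-volatile memory subsystem
    progif=$(printf '%02x' 2)     # 02: NVM Express programming interface
    # lspci -mm -n -D prints one quoted record per device, domain-qualified BDF first;
    # keep only -p02 devices, strip quotes, and print the BDF where field 2 is "0108".
    lspci -mm -n -D | grep -i -- "-p${progif}" | tr -d '"' |
        awk -v cc="${class}${subclass}" '{ if (cc ~ $2) print $1 }'
}
list_nvme_bdfs    # on this VM this would print 0000:00:10.0 and 0000:00:11.0

The two addresses it yields are the same ones the trace then filters through pci_can_use and finally hands to basic_rw.sh as Nvme0.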
00:06:36.519 20:47:47 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:36.519 20:47:47 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd 
-- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:36.519 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:06:36.520 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@142 -- # 
read -r _ lib _ 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:36.521 * spdk_dd linked to liburing 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:36.521 20:47:47 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:36.521 20:47:47 spdk_dd -- 
common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:36.521 20:47:47 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 
00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:36.522 20:47:47 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:36.522 20:47:47 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:36.522 20:47:47 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:36.522 20:47:47 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:36.522 20:47:47 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:36.522 20:47:47 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:36.522 20:47:47 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:36.522 20:47:47 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:36.522 20:47:47 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.522 20:47:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:36.522 ************************************ 00:06:36.522 START TEST spdk_dd_basic_rw 00:06:36.522 ************************************ 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:36.522 * Looking for test storage... 00:06:36.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:36.522 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:36.522 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:36.782 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue 
Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL 
Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 
0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:36.782 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity 
Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported 
LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.783 ************************************ 00:06:36.783 START TEST dd_bs_lt_native_bs 00:06:36.783 ************************************ 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1121 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # local es=0 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:36.783 20:47:47 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:36.783 { 00:06:36.783 "subsystems": [ 00:06:36.783 { 00:06:36.783 "subsystem": "bdev", 00:06:36.783 "config": [ 00:06:36.783 { 00:06:36.783 "params": { 00:06:36.783 "trtype": "pcie", 00:06:36.783 "traddr": "0000:00:10.0", 00:06:36.783 "name": 
"Nvme0" 00:06:36.783 }, 00:06:36.783 "method": "bdev_nvme_attach_controller" 00:06:36.783 }, 00:06:36.783 { 00:06:36.783 "method": "bdev_wait_for_examine" 00:06:36.783 } 00:06:36.783 ] 00:06:36.783 } 00:06:36.783 ] 00:06:36.783 } 00:06:36.783 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:36.783 [2024-08-11 20:47:47.445721] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:36.783 [2024-08-11 20:47:47.445806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70352 ] 00:06:37.041 [2024-08-11 20:47:47.578409] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.041 [2024-08-11 20:47:47.668007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.041 [2024-08-11 20:47:47.724450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.299 [2024-08-11 20:47:47.826707] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:37.299 [2024-08-11 20:47:47.826774] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.299 [2024-08-11 20:47:47.944173] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@649 -- # es=234 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@658 -- # es=106 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # case "$es" in 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@666 -- # es=1 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:37.299 00:06:37.299 real 0m0.631s 00:06:37.299 user 0m0.427s 00:06:37.299 sys 0m0.161s 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.299 ************************************ 00:06:37.299 END TEST dd_bs_lt_native_bs 00:06:37.299 ************************************ 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.299 ************************************ 00:06:37.299 START TEST dd_rw 00:06:37.299 ************************************ 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1121 -- # basic_rw 4096 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:37.299 20:47:48 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:37.299 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:37.557 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:37.557 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:37.557 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:37.557 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:37.557 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:37.557 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:37.557 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:37.557 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:37.557 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.124 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:38.124 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:38.124 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.124 20:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.124 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:38.124 [2024-08-11 20:47:48.731836] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
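The spdk_dd runs above and below each receive their bdev configuration as JSON over --json /dev/fd/62 (or /dev/fd/61), generated by gen_conf; the dumped structure attaches the PCIe controller at 0000:00:10.0 as "Nvme0" and then waits for bdev examination, which is what exposes the Nvme0n1 namespace bdev the tests write to. A minimal sketch of the same configuration kept in a regular file; the file path and the standalone invocation are assumptions, not what the harness actually does:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Hypothetical direct call; the test feeds the identical JSON through a pipe instead of a file.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /tmp/nvme0.json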
00:06:38.124 [2024-08-11 20:47:48.731944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70388 ] 00:06:38.124 { 00:06:38.124 "subsystems": [ 00:06:38.124 { 00:06:38.124 "subsystem": "bdev", 00:06:38.124 "config": [ 00:06:38.124 { 00:06:38.124 "params": { 00:06:38.124 "trtype": "pcie", 00:06:38.124 "traddr": "0000:00:10.0", 00:06:38.124 "name": "Nvme0" 00:06:38.124 }, 00:06:38.124 "method": "bdev_nvme_attach_controller" 00:06:38.124 }, 00:06:38.124 { 00:06:38.124 "method": "bdev_wait_for_examine" 00:06:38.124 } 00:06:38.124 ] 00:06:38.124 } 00:06:38.124 ] 00:06:38.124 } 00:06:38.124 [2024-08-11 20:47:48.870338] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.383 [2024-08-11 20:47:48.927230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.383 [2024-08-11 20:47:48.982681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.642  Copying: 60/60 [kB] (average 19 MBps) 00:06:38.642 00:06:38.642 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:38.642 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:38.642 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.642 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.642 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:38.642 [2024-08-11 20:47:49.320699] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:38.642 [2024-08-11 20:47:49.320800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70401 ] 00:06:38.642 { 00:06:38.642 "subsystems": [ 00:06:38.642 { 00:06:38.642 "subsystem": "bdev", 00:06:38.642 "config": [ 00:06:38.642 { 00:06:38.642 "params": { 00:06:38.642 "trtype": "pcie", 00:06:38.642 "traddr": "0000:00:10.0", 00:06:38.642 "name": "Nvme0" 00:06:38.642 }, 00:06:38.642 "method": "bdev_nvme_attach_controller" 00:06:38.642 }, 00:06:38.642 { 00:06:38.642 "method": "bdev_wait_for_examine" 00:06:38.642 } 00:06:38.642 ] 00:06:38.642 } 00:06:38.642 ] 00:06:38.642 } 00:06:38.901 [2024-08-11 20:47:49.457071] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.901 [2024-08-11 20:47:49.530541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.901 [2024-08-11 20:47:49.588134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.178  Copying: 60/60 [kB] (average 19 MBps) 00:06:39.178 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.178 20:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.437 { 00:06:39.437 "subsystems": [ 00:06:39.437 { 00:06:39.437 "subsystem": "bdev", 00:06:39.437 "config": [ 00:06:39.437 { 00:06:39.437 "params": { 00:06:39.437 "trtype": "pcie", 00:06:39.437 "traddr": "0000:00:10.0", 00:06:39.437 "name": "Nvme0" 00:06:39.437 }, 00:06:39.437 "method": "bdev_nvme_attach_controller" 00:06:39.437 }, 00:06:39.437 { 00:06:39.437 "method": "bdev_wait_for_examine" 00:06:39.437 } 00:06:39.437 ] 00:06:39.437 } 00:06:39.437 ] 00:06:39.437 } 00:06:39.437 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:39.437 [2024-08-11 20:47:49.968174] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:39.437 [2024-08-11 20:47:49.968284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70417 ] 00:06:39.437 [2024-08-11 20:47:50.106394] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.437 [2024-08-11 20:47:50.207468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.695 [2024-08-11 20:47:50.266812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.954  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:39.954 00:06:39.954 20:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:39.954 20:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:39.954 20:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:39.954 20:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:39.954 20:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:39.954 20:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:39.954 20:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.521 20:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:40.521 20:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:40.521 20:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.521 20:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.521 { 00:06:40.521 "subsystems": [ 00:06:40.521 { 00:06:40.521 "subsystem": "bdev", 00:06:40.521 "config": [ 00:06:40.521 { 00:06:40.521 "params": { 00:06:40.521 "trtype": "pcie", 00:06:40.521 "traddr": "0000:00:10.0", 00:06:40.521 "name": "Nvme0" 00:06:40.521 }, 00:06:40.521 "method": "bdev_nvme_attach_controller" 00:06:40.521 }, 00:06:40.521 { 00:06:40.521 "method": "bdev_wait_for_examine" 00:06:40.521 } 00:06:40.521 ] 00:06:40.521 } 00:06:40.521 ] 00:06:40.521 } 00:06:40.521 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:40.521 [2024-08-11 20:47:51.257047] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
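Each dd_rw pass in this stretch of the log follows the same cycle: write a generated dump file to the Nvme0n1 bdev, read it back into a second dump file, byte-compare the two, then clear the bdev before the next combination. A standalone sketch of one pass (bs=4096, qd=1), assuming the config file from the earlier sketch and with spdk_dd standing for the full build/bin path shown in the xtrace:

head -c 61440 /dev/urandom > dd.dump0          # 15 blocks of 4096 bytes, matching count=15 / size=61440 above
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /tmp/nvme0.json
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json /tmp/nvme0.json
diff -q dd.dump0 dd.dump1 && echo "round trip matches"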
00:06:40.521 [2024-08-11 20:47:51.257196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70443 ] 00:06:40.779 [2024-08-11 20:47:51.404169] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.779 [2024-08-11 20:47:51.511704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.037 [2024-08-11 20:47:51.568299] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.294  Copying: 60/60 [kB] (average 58 MBps) 00:06:41.294 00:06:41.294 20:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:41.294 20:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:41.294 20:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.294 20:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.294 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:41.294 [2024-08-11 20:47:51.934788] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:41.294 [2024-08-11 20:47:51.934932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70457 ] 00:06:41.294 { 00:06:41.294 "subsystems": [ 00:06:41.294 { 00:06:41.294 "subsystem": "bdev", 00:06:41.294 "config": [ 00:06:41.294 { 00:06:41.294 "params": { 00:06:41.294 "trtype": "pcie", 00:06:41.294 "traddr": "0000:00:10.0", 00:06:41.294 "name": "Nvme0" 00:06:41.294 }, 00:06:41.294 "method": "bdev_nvme_attach_controller" 00:06:41.294 }, 00:06:41.294 { 00:06:41.294 "method": "bdev_wait_for_examine" 00:06:41.294 } 00:06:41.294 ] 00:06:41.294 } 00:06:41.294 ] 00:06:41.294 } 00:06:41.552 [2024-08-11 20:47:52.073322] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.552 [2024-08-11 20:47:52.171342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.552 [2024-08-11 20:47:52.227352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.810  Copying: 60/60 [kB] (average 58 MBps) 00:06:41.810 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 
--json /dev/fd/62 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.810 20:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.068 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:42.068 [2024-08-11 20:47:52.597818] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:42.068 [2024-08-11 20:47:52.597904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70478 ] 00:06:42.068 { 00:06:42.068 "subsystems": [ 00:06:42.068 { 00:06:42.068 "subsystem": "bdev", 00:06:42.068 "config": [ 00:06:42.068 { 00:06:42.068 "params": { 00:06:42.068 "trtype": "pcie", 00:06:42.068 "traddr": "0000:00:10.0", 00:06:42.068 "name": "Nvme0" 00:06:42.068 }, 00:06:42.068 "method": "bdev_nvme_attach_controller" 00:06:42.068 }, 00:06:42.068 { 00:06:42.068 "method": "bdev_wait_for_examine" 00:06:42.068 } 00:06:42.068 ] 00:06:42.068 } 00:06:42.069 ] 00:06:42.069 } 00:06:42.069 [2024-08-11 20:47:52.730972] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.069 [2024-08-11 20:47:52.831504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.327 [2024-08-11 20:47:52.887514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.585  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:42.585 00:06:42.585 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:42.585 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:42.585 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:42.585 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:42.585 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:42.585 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:42.585 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:42.585 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.518 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:43.518 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:43.518 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:43.518 20:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.518 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:43.518 [2024-08-11 20:47:53.981823] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:43.518 [2024-08-11 20:47:53.981944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70497 ] 00:06:43.518 { 00:06:43.518 "subsystems": [ 00:06:43.518 { 00:06:43.518 "subsystem": "bdev", 00:06:43.518 "config": [ 00:06:43.518 { 00:06:43.518 "params": { 00:06:43.518 "trtype": "pcie", 00:06:43.518 "traddr": "0000:00:10.0", 00:06:43.518 "name": "Nvme0" 00:06:43.518 }, 00:06:43.518 "method": "bdev_nvme_attach_controller" 00:06:43.518 }, 00:06:43.518 { 00:06:43.518 "method": "bdev_wait_for_examine" 00:06:43.518 } 00:06:43.518 ] 00:06:43.518 } 00:06:43.518 ] 00:06:43.518 } 00:06:43.518 [2024-08-11 20:47:54.122765] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.518 [2024-08-11 20:47:54.222460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.518 [2024-08-11 20:47:54.284140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.034  Copying: 56/56 [kB] (average 54 MBps) 00:06:44.034 00:06:44.034 20:47:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:44.034 20:47:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:44.034 20:47:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.034 20:47:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.034 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:44.034 [2024-08-11 20:47:54.639023] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:44.034 [2024-08-11 20:47:54.639121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70516 ] 00:06:44.034 { 00:06:44.034 "subsystems": [ 00:06:44.034 { 00:06:44.034 "subsystem": "bdev", 00:06:44.034 "config": [ 00:06:44.034 { 00:06:44.034 "params": { 00:06:44.034 "trtype": "pcie", 00:06:44.034 "traddr": "0000:00:10.0", 00:06:44.034 "name": "Nvme0" 00:06:44.034 }, 00:06:44.034 "method": "bdev_nvme_attach_controller" 00:06:44.034 }, 00:06:44.034 { 00:06:44.034 "method": "bdev_wait_for_examine" 00:06:44.034 } 00:06:44.034 ] 00:06:44.034 } 00:06:44.034 ] 00:06:44.034 } 00:06:44.034 [2024-08-11 20:47:54.769216] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.292 [2024-08-11 20:47:54.855431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.292 [2024-08-11 20:47:54.908140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.550  Copying: 56/56 [kB] (average 27 MBps) 00:06:44.550 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.550 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.550 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:44.550 [2024-08-11 20:47:55.276481] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
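The clear_nvme step that follows each diff above is just another spdk_dd run: it writes a single 1 MiB block of zeros (bs=1048576, count=1) from /dev/zero to the bdev, enough to cover the at most 60 KiB written by any pass, so the next pass starts from known content. A rough equivalent under the same assumed config file:

# Approximation of the clear_nvme call visible in the xtrace.
spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json /tmp/nvme0.json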
00:06:44.550 [2024-08-11 20:47:55.276602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70526 ] 00:06:44.550 { 00:06:44.550 "subsystems": [ 00:06:44.550 { 00:06:44.550 "subsystem": "bdev", 00:06:44.550 "config": [ 00:06:44.550 { 00:06:44.550 "params": { 00:06:44.550 "trtype": "pcie", 00:06:44.550 "traddr": "0000:00:10.0", 00:06:44.550 "name": "Nvme0" 00:06:44.550 }, 00:06:44.550 "method": "bdev_nvme_attach_controller" 00:06:44.550 }, 00:06:44.550 { 00:06:44.550 "method": "bdev_wait_for_examine" 00:06:44.550 } 00:06:44.550 ] 00:06:44.550 } 00:06:44.550 ] 00:06:44.550 } 00:06:44.808 [2024-08-11 20:47:55.418599] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.808 [2024-08-11 20:47:55.506711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.808 [2024-08-11 20:47:55.559066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.375  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:45.376 00:06:45.376 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:45.376 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:45.376 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:45.376 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:45.376 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:45.376 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:45.376 20:47:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.649 20:47:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:45.649 20:47:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:45.649 20:47:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.649 20:47:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.649 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:45.649 [2024-08-11 20:47:56.354054] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:45.649 [2024-08-11 20:47:56.354170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70545 ] 00:06:45.649 { 00:06:45.649 "subsystems": [ 00:06:45.649 { 00:06:45.649 "subsystem": "bdev", 00:06:45.649 "config": [ 00:06:45.649 { 00:06:45.649 "params": { 00:06:45.649 "trtype": "pcie", 00:06:45.649 "traddr": "0000:00:10.0", 00:06:45.649 "name": "Nvme0" 00:06:45.649 }, 00:06:45.649 "method": "bdev_nvme_attach_controller" 00:06:45.649 }, 00:06:45.649 { 00:06:45.649 "method": "bdev_wait_for_examine" 00:06:45.649 } 00:06:45.649 ] 00:06:45.649 } 00:06:45.649 ] 00:06:45.649 } 00:06:45.907 [2024-08-11 20:47:56.492567] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.907 [2024-08-11 20:47:56.578712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.907 [2024-08-11 20:47:56.635246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.165  Copying: 56/56 [kB] (average 54 MBps) 00:06:46.165 00:06:46.165 20:47:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:46.165 20:47:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:46.165 20:47:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.165 20:47:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.422 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:46.422 [2024-08-11 20:47:56.967376] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:46.422 [2024-08-11 20:47:56.967457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70564 ] 00:06:46.422 { 00:06:46.422 "subsystems": [ 00:06:46.422 { 00:06:46.422 "subsystem": "bdev", 00:06:46.422 "config": [ 00:06:46.422 { 00:06:46.422 "params": { 00:06:46.422 "trtype": "pcie", 00:06:46.422 "traddr": "0000:00:10.0", 00:06:46.422 "name": "Nvme0" 00:06:46.422 }, 00:06:46.422 "method": "bdev_nvme_attach_controller" 00:06:46.422 }, 00:06:46.422 { 00:06:46.422 "method": "bdev_wait_for_examine" 00:06:46.422 } 00:06:46.422 ] 00:06:46.422 } 00:06:46.422 ] 00:06:46.422 } 00:06:46.422 [2024-08-11 20:47:57.097752] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.422 [2024-08-11 20:47:57.190077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.681 [2024-08-11 20:47:57.241909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.939  Copying: 56/56 [kB] (average 54 MBps) 00:06:46.939 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.939 20:47:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.939 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:46.939 { 00:06:46.939 "subsystems": [ 00:06:46.939 { 00:06:46.939 "subsystem": "bdev", 00:06:46.939 "config": [ 00:06:46.939 { 00:06:46.939 "params": { 00:06:46.939 "trtype": "pcie", 00:06:46.939 "traddr": "0000:00:10.0", 00:06:46.939 "name": "Nvme0" 00:06:46.939 }, 00:06:46.939 "method": "bdev_nvme_attach_controller" 00:06:46.939 }, 00:06:46.939 { 00:06:46.939 "method": "bdev_wait_for_examine" 00:06:46.939 } 00:06:46.939 ] 00:06:46.939 } 00:06:46.939 ] 00:06:46.939 } 00:06:46.939 [2024-08-11 20:47:57.621948] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:46.939 [2024-08-11 20:47:57.622086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70579 ] 00:06:47.197 [2024-08-11 20:47:57.761008] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.197 [2024-08-11 20:47:57.847626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.197 [2024-08-11 20:47:57.900280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.455  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:47.455 00:06:47.455 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:47.455 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:47.455 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:47.455 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:47.455 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:47.455 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:47.455 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:47.455 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.020 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:48.020 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:48.020 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.020 20:47:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.020 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:48.020 [2024-08-11 20:47:58.658860] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
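The block sizes and counts cycling through this output come from a small sweep in basic_rw: bss collects native_bs shifted left by 0, 1 and 2 (4096, 8192 and 16384 bytes), qds is (1 64), and the dump size shrinks as the block size grows, giving count=15 (61440 bytes), count=7 (57344) and count=3 (49152). A sketch that reproduces those numbers; the integer-division rule for count is an assumption consistent with the values above, not necessarily the harness's own formula:

native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do bss+=($((native_bs << bs))); done   # 4096 8192 16384, as in the xtrace
for bs in "${bss[@]}"; do
  count=$((61440 / bs))        # assumed rule; it yields the 15, 7 and 3 seen in the log
  echo "bs=$bs qds=(${qds[*]}) count=$count size=$((count * bs))"
done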
00:06:48.020 [2024-08-11 20:47:58.658962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70600 ] 00:06:48.020 { 00:06:48.020 "subsystems": [ 00:06:48.020 { 00:06:48.020 "subsystem": "bdev", 00:06:48.020 "config": [ 00:06:48.021 { 00:06:48.021 "params": { 00:06:48.021 "trtype": "pcie", 00:06:48.021 "traddr": "0000:00:10.0", 00:06:48.021 "name": "Nvme0" 00:06:48.021 }, 00:06:48.021 "method": "bdev_nvme_attach_controller" 00:06:48.021 }, 00:06:48.021 { 00:06:48.021 "method": "bdev_wait_for_examine" 00:06:48.021 } 00:06:48.021 ] 00:06:48.021 } 00:06:48.021 ] 00:06:48.021 } 00:06:48.021 [2024-08-11 20:47:58.798014] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.278 [2024-08-11 20:47:58.885686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.278 [2024-08-11 20:47:58.938707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.536  Copying: 48/48 [kB] (average 46 MBps) 00:06:48.536 00:06:48.537 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:48.537 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:48.537 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.537 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.537 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:48.537 [2024-08-11 20:47:59.301158] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:48.537 [2024-08-11 20:47:59.301266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70614 ] 00:06:48.537 { 00:06:48.537 "subsystems": [ 00:06:48.537 { 00:06:48.537 "subsystem": "bdev", 00:06:48.537 "config": [ 00:06:48.537 { 00:06:48.537 "params": { 00:06:48.537 "trtype": "pcie", 00:06:48.537 "traddr": "0000:00:10.0", 00:06:48.537 "name": "Nvme0" 00:06:48.537 }, 00:06:48.537 "method": "bdev_nvme_attach_controller" 00:06:48.537 }, 00:06:48.537 { 00:06:48.537 "method": "bdev_wait_for_examine" 00:06:48.537 } 00:06:48.537 ] 00:06:48.537 } 00:06:48.537 ] 00:06:48.537 } 00:06:48.795 [2024-08-11 20:47:59.430825] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.795 [2024-08-11 20:47:59.522639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.053 [2024-08-11 20:47:59.578917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.311  Copying: 48/48 [kB] (average 46 MBps) 00:06:49.311 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:49.311 20:47:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.311 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:49.311 [2024-08-11 20:47:59.952442] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:49.311 [2024-08-11 20:47:59.952551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70635 ] 00:06:49.311 { 00:06:49.311 "subsystems": [ 00:06:49.311 { 00:06:49.311 "subsystem": "bdev", 00:06:49.311 "config": [ 00:06:49.311 { 00:06:49.311 "params": { 00:06:49.311 "trtype": "pcie", 00:06:49.311 "traddr": "0000:00:10.0", 00:06:49.311 "name": "Nvme0" 00:06:49.311 }, 00:06:49.311 "method": "bdev_nvme_attach_controller" 00:06:49.311 }, 00:06:49.311 { 00:06:49.311 "method": "bdev_wait_for_examine" 00:06:49.311 } 00:06:49.311 ] 00:06:49.311 } 00:06:49.311 ] 00:06:49.311 } 00:06:49.568 [2024-08-11 20:48:00.088815] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.568 [2024-08-11 20:48:00.176170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.568 [2024-08-11 20:48:00.231797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.826  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:49.826 00:06:49.826 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:49.826 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:49.826 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:49.826 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:49.826 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:49.826 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:49.826 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.392 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:50.392 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:50.392 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.392 20:48:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.392 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:50.392 [2024-08-11 20:48:01.020509] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:50.392 [2024-08-11 20:48:01.020955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70654 ] 00:06:50.392 { 00:06:50.392 "subsystems": [ 00:06:50.392 { 00:06:50.392 "subsystem": "bdev", 00:06:50.392 "config": [ 00:06:50.392 { 00:06:50.392 "params": { 00:06:50.392 "trtype": "pcie", 00:06:50.392 "traddr": "0000:00:10.0", 00:06:50.392 "name": "Nvme0" 00:06:50.392 }, 00:06:50.392 "method": "bdev_nvme_attach_controller" 00:06:50.392 }, 00:06:50.392 { 00:06:50.392 "method": "bdev_wait_for_examine" 00:06:50.392 } 00:06:50.392 ] 00:06:50.392 } 00:06:50.392 ] 00:06:50.392 } 00:06:50.392 [2024-08-11 20:48:01.157438] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.650 [2024-08-11 20:48:01.246849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.650 [2024-08-11 20:48:01.299570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.907  Copying: 48/48 [kB] (average 46 MBps) 00:06:50.907 00:06:50.907 20:48:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:50.907 20:48:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:50.907 20:48:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.907 20:48:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.907 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:50.907 [2024-08-11 20:48:01.642985] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:50.907 [2024-08-11 20:48:01.643100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70663 ] 00:06:50.907 { 00:06:50.907 "subsystems": [ 00:06:50.907 { 00:06:50.907 "subsystem": "bdev", 00:06:50.907 "config": [ 00:06:50.907 { 00:06:50.907 "params": { 00:06:50.907 "trtype": "pcie", 00:06:50.907 "traddr": "0000:00:10.0", 00:06:50.907 "name": "Nvme0" 00:06:50.907 }, 00:06:50.907 "method": "bdev_nvme_attach_controller" 00:06:50.907 }, 00:06:50.907 { 00:06:50.907 "method": "bdev_wait_for_examine" 00:06:50.907 } 00:06:50.907 ] 00:06:50.907 } 00:06:50.907 ] 00:06:50.907 } 00:06:51.165 [2024-08-11 20:48:01.771956] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.165 [2024-08-11 20:48:01.841472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.165 [2024-08-11 20:48:01.893188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.423  Copying: 48/48 [kB] (average 46 MBps) 00:06:51.423 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:51.423 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.681 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:51.681 [2024-08-11 20:48:02.237489] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:51.681 [2024-08-11 20:48:02.237776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70683 ] 00:06:51.681 { 00:06:51.681 "subsystems": [ 00:06:51.681 { 00:06:51.681 "subsystem": "bdev", 00:06:51.681 "config": [ 00:06:51.681 { 00:06:51.681 "params": { 00:06:51.681 "trtype": "pcie", 00:06:51.681 "traddr": "0000:00:10.0", 00:06:51.681 "name": "Nvme0" 00:06:51.681 }, 00:06:51.681 "method": "bdev_nvme_attach_controller" 00:06:51.681 }, 00:06:51.681 { 00:06:51.681 "method": "bdev_wait_for_examine" 00:06:51.681 } 00:06:51.681 ] 00:06:51.681 } 00:06:51.681 ] 00:06:51.681 } 00:06:51.681 [2024-08-11 20:48:02.374403] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.681 [2024-08-11 20:48:02.427535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.939 [2024-08-11 20:48:02.479231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.197  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:52.197 00:06:52.197 ************************************ 00:06:52.197 END TEST dd_rw 00:06:52.197 ************************************ 00:06:52.197 00:06:52.197 real 0m14.709s 00:06:52.197 user 0m10.753s 00:06:52.197 sys 0m5.450s 00:06:52.197 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.197 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.197 20:48:02 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:52.197 20:48:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.197 20:48:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.197 20:48:02 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:52.197 ************************************ 00:06:52.197 START TEST dd_rw_offset 00:06:52.197 ************************************ 00:06:52.197 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1121 -- # basic_offset 00:06:52.198 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:52.198 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:52.198 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:52.198 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:52.198 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:52.198 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=spdorbe3na7o2f00431lqlwhhqqtgi7e1szyp93rcz971g617vi3nek4tbj04fgbctzhi1lzkd15n6c509y50imhymmu6hvzy87etruvqy6497fgsgnhwcfx39j4jl7y9qh9yn7w7eousxg1ll5am6d3mjkpvywbkl2bs6t7b2v8ysac2ysy3raq69ej75ru3mlzecy9hwmylxnwb3lfnzooccw98y7y6pdm09yppsgz22co9ka2rxgzqsjyfmiw275g404w4th1fy9ja2yv2dictkcd10im7jmfwq80zyrfebikecs4umqupfxwqa2x1dus41w8ta2rwg7sqk0wm37g7aaw8uhu8mvynohuej3wae9seaeu77nxll2x81t6rvpsvivl2bcckw55qtdpm3ogz2hq3g6uqevxqequ6cp896ge12wgi07o2i0x9eyofy07dsxoarkmwrl70xsjc3v8lq12jmf8vkamy84rvr87uwrpjrkojkbyhv59pza4em3glxvmla58memvymdecubzj6rvvbqzpwffnxceh1ia72fz656k9cwqb3t62vybw66ugs07vzb3wrh4q81r8j0o91av6rc4ujqp49ixd4tiyi4orlzlo4l0m8jmkitgfxqszm74b8pr0wx1p24w0u8nerf4pdyldiu4vkynut7kyn69j53f774h4z2krzsqmt50hic4fhh1r8r7w0w6sw0p6is0oe745mccstvyzs98poz88cofpffhdb05x82wej9ov3si9ux5tr3j97cgvx38yawgpj2wrhatplm5eq9jlsu5h7cqn3iw1oikb373tp4l5abpd4n6al24sagzo8k2wbdnxj5jmhf4iot87bumrejgp1hp6gy2ovama7wqxkbb51t2nn4wo0xuv0h7m64hfdcdakayhve6s15169jatan2dpzrnylc20ub9usqaan6jwrcfwaf99son6wncqe7z1ezy6ds7c7ydcpctusv4nkbnqei15qc0c1smg2et7ch6k8crijswtnhenf0xmtse0qh51l2s6qc8shv301ffqfa7p5zmcq4xucy7fa9u2qv1x1nac5y9spx9a76fubwfap30jyvnkx6qarip6mbnh0g107pkn7tc5ebjpd44wvx8e6pmsjfoajemud3ixo3k2f16to6sgp0ybydpqhb1lzgi9h1gn4ctjphum66oz40izdap15h0a4hx0lf33ha93zyfdgwfv4vfyxle8oam6hza3fy4q9ku8ubcr88hkqornqlhumliv06cqbdfwfpzi3vw6hcnd0tml4wgmyoclpd2x3jsjm3asqz6jtmeptewndy952hkt1nz4cr9r9rbessrol9p0ayr1kvovms9g486fysofw4vjvg37irr1etpfm5yukq3prtbymgi60f6ss9tkpxzeiv00n703foo9bfc60t8utmsm6e3lh5lbz373rj3pjzhwydjayafhhhammt3p8i7ai32n2gjry1hmj5eth8i33729xw6if7qwjhcqgclgnjfdb48fv5cvp0sra25a05eeedz5fk52wcu623o65k6wincs6zp39cx9yiendqpi2grfz0s1saryf3x61ab1wlec1t15aqhyhzzx6fq7ym1h6yk9kam6dsobsl148rmjkqvzu0qcy70rop46p9h5wrbdsf07yb1pd7gxbauksfrda7vq78m19p3zngjouwg7fpve9ln0kvrr0b3cs6br820y2to0xnkqkvvu9g0yqi8ekzyurqpbme1xh64uaaw6baqjvkrvidqhg19hmy50yisswni5afhl6bm4676vp0iv7fu1t005zgirhoplmh0d9og20v2209p8mzhi66i6jj34ogbcgcra5b14bnmr475agm49fuz23l8wwry4tvwplpjns81dlscesjxmyoik885tgkeakq57eo62tdftc8pap4vab2map22ywwvvdi4yyt4dpxo4rciimth7cwn2ltqmyh2jgxcvdzajsd47ino5gzjv3h572ohuvy0em1pkqlscmdvb1qbdfuh0dwq9pxouy30q7o6ycqrenqomsxppixfa8gh60mi9xwylvg90hwiylni05anc9ygeghv5f14v2vcuzxzgy1jiwmwnqm3n7n674dtzip7wui8jnebaby1dks10xy8msiho9rftnk34gdxwkqa5oyh10iizx8m7115js47fbohlwqsg3tca5smflsgl0sfesb30yc2oje6puvgfbg8f10146sjce1ce3sai5oculew4xurssn6epwrsau9waytwn7nkw0bytl1u7ta5hdvqgyjrvqo4wbg3gag6nelu7uscsz4r1wp29z4b1ofauurnwn3d1ayt0jvpx78m51hzsll984z7z0uo33tuu2p0xc6mykmrg9w0w5w1zn3vhfck7poiajtxvolatq5hd42t5n5y30hd94wlrk8gyigpx3o2vcbsioidtowfts9301cfmqudhhxdx6j6hjfffqlvt7lz83yh0b7emtmzin1p9868pg4eu0yprm8qwyhndfje3cze3cl850h4c3pc33zmu8mlm97oml5jcbghdz4am1crba31674dlmzep1mrrv1no308vhggaizm2e20k7a34qgyi572hun6o42cwbau3xb7jxyamn4h189z88qvrj8ii8gihdfbjcc07ppwrjcx2kekjivlfy9x8mw7gfzasy5v1ksggzsbe8tsc8vno2a3jdtxc9pen16vyidw3qmndly4b0j98ocmvjd94tcu1983biaoisnd3hzcgdmdhqqr95xeg90cdibou7kkr9x21hhuwocx8z3tfc27zybhfk80be9fuu4rc4vagg6dgvg3hb9f1mfdwynvfcglco8ps333oje9wlcbgzveuj8t3u3vcq1it2yss4159ayfo5nqgmrn6lm8yxrb8vvdtvp7hn51qc76dtv68bhfsuhjjni5crohkh8mfwyi5d77hrz5z6znfj4zguzzkek9hpgwr2oiw5jma77hzl3l276gagr0nzijj95pro5b600403nejdzb5jt7tkgspf9fggilih952ugrnicwuoe5fvze3m2onqofn801anku6awrn0ay8i5cg22etzw69m83rc5oepestvffqw2jn3lbgulsw91ev6sts6d52dojtiq0v1013z1rqtqxiznblqy43k54j0df3ze6bnct1evojczsbjrydx542jjfijstr3maqu05udsulnvsatslhgrc5xrg5zmsvw0vztrmdgknqszv2eivwvzeuoch8ab9gb3jsjewee47l2lldb1u4nyiw0ho9o2pnwmceo750n3f2thnxavj5tqkcufxocgid3zgmlupcsz7f9nzd6mjfjti3zky80izvkx6oczbmni0m8cc45ijiaq4c0vre70rmnuwj72ueb9xhbueoq5ki2brwwzmzc3l4etgua29jv2452z4r2rjhzp5zdv1hljj16cg5jorl5vzvbyl
vgig3jpnpgnv5dru3nxjv0j2320hak7ffb87fi9l7fsg1t83g7qzab991of90k5nkri9ut9ieaud5lu2iqbdtzrlfrdgchghc401b0erdkokhibio2ib2eg3yf2fmb0i47s0310ueirrs9haehcbd9tcmy6riblopg5a9ahybxaf7cromhp2czb6nhlunhjvly9tadecc7iqzy07cksjm22n1ixqulmcyprqlrd76wur0l545vu4lay1cvna5860kwy3y802dwoqz8aec4woc2gnj33nfwdric1s0b56iqcqgz1l9zpepj7s9o9dpc6ewc4xld56yn016jg44uchz31ltlfgo1go518xukg0g18xfkt90tiu6o7xx7ynmkbrzoag3nw8z4clg2u56miwcf0c5dsdm58bat0338tce79vwkbeqwl7j9uq1e3hbvc18n7yyca29k7ahuk7noqjkebw97e8uempoa6yjgz3uvbqqjvxnwsuoragcmuca9ybf7bqr0l0su3fu0j1ctm9291jw3ip1nh2vj 00:06:52.198 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:52.198 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:52.198 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:52.198 20:48:02 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:52.198 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:52.198 [2024-08-11 20:48:02.943881] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:52.198 [2024-08-11 20:48:02.944007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70719 ] 00:06:52.198 { 00:06:52.198 "subsystems": [ 00:06:52.198 { 00:06:52.198 "subsystem": "bdev", 00:06:52.198 "config": [ 00:06:52.198 { 00:06:52.198 "params": { 00:06:52.198 "trtype": "pcie", 00:06:52.198 "traddr": "0000:00:10.0", 00:06:52.198 "name": "Nvme0" 00:06:52.198 }, 00:06:52.198 "method": "bdev_nvme_attach_controller" 00:06:52.198 }, 00:06:52.198 { 00:06:52.198 "method": "bdev_wait_for_examine" 00:06:52.198 } 00:06:52.198 ] 00:06:52.198 } 00:06:52.198 ] 00:06:52.198 } 00:06:52.456 [2024-08-11 20:48:03.086319] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.456 [2024-08-11 20:48:03.137884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.456 [2024-08-11 20:48:03.192587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.714  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:52.714 00:06:52.714 20:48:03 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:52.714 20:48:03 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:52.714 20:48:03 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:52.714 20:48:03 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:52.971 { 00:06:52.971 "subsystems": [ 00:06:52.971 { 00:06:52.971 "subsystem": "bdev", 00:06:52.971 "config": [ 00:06:52.971 { 00:06:52.971 "params": { 00:06:52.971 "trtype": "pcie", 00:06:52.971 "traddr": "0000:00:10.0", 00:06:52.971 "name": "Nvme0" 00:06:52.971 }, 00:06:52.971 "method": "bdev_nvme_attach_controller" 00:06:52.971 }, 00:06:52.971 { 00:06:52.971 "method": "bdev_wait_for_examine" 00:06:52.971 } 00:06:52.971 ] 00:06:52.971 } 00:06:52.971 ] 00:06:52.971 } 00:06:52.971 Invalid opts->opts_size 0 too small, please set opts_size 
correctly 00:06:52.971 [2024-08-11 20:48:03.536425] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:52.971 [2024-08-11 20:48:03.536684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70727 ] 00:06:52.971 [2024-08-11 20:48:03.674221] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.971 [2024-08-11 20:48:03.739838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.229 [2024-08-11 20:48:03.798465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.488  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:53.488 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ spdorbe3na7o2f00431lqlwhhqqtgi7e1szyp93rcz971g617vi3nek4tbj04fgbctzhi1lzkd15n6c509y50imhymmu6hvzy87etruvqy6497fgsgnhwcfx39j4jl7y9qh9yn7w7eousxg1ll5am6d3mjkpvywbkl2bs6t7b2v8ysac2ysy3raq69ej75ru3mlzecy9hwmylxnwb3lfnzooccw98y7y6pdm09yppsgz22co9ka2rxgzqsjyfmiw275g404w4th1fy9ja2yv2dictkcd10im7jmfwq80zyrfebikecs4umqupfxwqa2x1dus41w8ta2rwg7sqk0wm37g7aaw8uhu8mvynohuej3wae9seaeu77nxll2x81t6rvpsvivl2bcckw55qtdpm3ogz2hq3g6uqevxqequ6cp896ge12wgi07o2i0x9eyofy07dsxoarkmwrl70xsjc3v8lq12jmf8vkamy84rvr87uwrpjrkojkbyhv59pza4em3glxvmla58memvymdecubzj6rvvbqzpwffnxceh1ia72fz656k9cwqb3t62vybw66ugs07vzb3wrh4q81r8j0o91av6rc4ujqp49ixd4tiyi4orlzlo4l0m8jmkitgfxqszm74b8pr0wx1p24w0u8nerf4pdyldiu4vkynut7kyn69j53f774h4z2krzsqmt50hic4fhh1r8r7w0w6sw0p6is0oe745mccstvyzs98poz88cofpffhdb05x82wej9ov3si9ux5tr3j97cgvx38yawgpj2wrhatplm5eq9jlsu5h7cqn3iw1oikb373tp4l5abpd4n6al24sagzo8k2wbdnxj5jmhf4iot87bumrejgp1hp6gy2ovama7wqxkbb51t2nn4wo0xuv0h7m64hfdcdakayhve6s15169jatan2dpzrnylc20ub9usqaan6jwrcfwaf99son6wncqe7z1ezy6ds7c7ydcpctusv4nkbnqei15qc0c1smg2et7ch6k8crijswtnhenf0xmtse0qh51l2s6qc8shv301ffqfa7p5zmcq4xucy7fa9u2qv1x1nac5y9spx9a76fubwfap30jyvnkx6qarip6mbnh0g107pkn7tc5ebjpd44wvx8e6pmsjfoajemud3ixo3k2f16to6sgp0ybydpqhb1lzgi9h1gn4ctjphum66oz40izdap15h0a4hx0lf33ha93zyfdgwfv4vfyxle8oam6hza3fy4q9ku8ubcr88hkqornqlhumliv06cqbdfwfpzi3vw6hcnd0tml4wgmyoclpd2x3jsjm3asqz6jtmeptewndy952hkt1nz4cr9r9rbessrol9p0ayr1kvovms9g486fysofw4vjvg37irr1etpfm5yukq3prtbymgi60f6ss9tkpxzeiv00n703foo9bfc60t8utmsm6e3lh5lbz373rj3pjzhwydjayafhhhammt3p8i7ai32n2gjry1hmj5eth8i33729xw6if7qwjhcqgclgnjfdb48fv5cvp0sra25a05eeedz5fk52wcu623o65k6wincs6zp39cx9yiendqpi2grfz0s1saryf3x61ab1wlec1t15aqhyhzzx6fq7ym1h6yk9kam6dsobsl148rmjkqvzu0qcy70rop46p9h5wrbdsf07yb1pd7gxbauksfrda7vq78m19p3zngjouwg7fpve9ln0kvrr0b3cs6br820y2to0xnkqkvvu9g0yqi8ekzyurqpbme1xh64uaaw6baqjvkrvidqhg19hmy50yisswni5afhl6bm4676vp0iv7fu1t005zgirhoplmh0d9og20v2209p8mzhi66i6jj34ogbcgcra5b14bnmr475agm49fuz23l8wwry4tvwplpjns81dlscesjxmyoik885tgkeakq57eo62tdftc8pap4vab2map22ywwvvdi4yyt4dpxo4rciimth7cwn2ltqmyh2jgxcvdzajsd47ino5gzjv3h572ohuvy0em1pkqlscmdvb1qbdfuh0dwq9pxouy30q7o6ycqrenqomsxppixfa8gh60mi9xwylvg90hwiylni05anc9ygeghv5f14v2vcuzxzgy1jiwmwnqm3n7n674dtzip7wui8jnebaby1dks10xy8msiho9rftnk34gdxwkqa5oyh10iizx8m7115js47fbohlwqsg3tca5smflsgl0sfesb30yc2oje6puvgfbg8f10146sjce1ce3sai5oculew4xurssn6epwrsau9waytwn7nkw0bytl1u7ta5hdvqgyjrvqo4wbg3gag6nelu7uscsz4r1wp29z4b1ofauurnwn3d1ayt0jvpx78m51hzsll984z7z0uo33tuu2p0xc6mykmrg9w0w5w1zn3vhfck7poiajtxvolatq5hd42t5n5y30hd94wlrk8gyigpx3o2vcbsioid
towfts9301cfmqudhhxdx6j6hjfffqlvt7lz83yh0b7emtmzin1p9868pg4eu0yprm8qwyhndfje3cze3cl850h4c3pc33zmu8mlm97oml5jcbghdz4am1crba31674dlmzep1mrrv1no308vhggaizm2e20k7a34qgyi572hun6o42cwbau3xb7jxyamn4h189z88qvrj8ii8gihdfbjcc07ppwrjcx2kekjivlfy9x8mw7gfzasy5v1ksggzsbe8tsc8vno2a3jdtxc9pen16vyidw3qmndly4b0j98ocmvjd94tcu1983biaoisnd3hzcgdmdhqqr95xeg90cdibou7kkr9x21hhuwocx8z3tfc27zybhfk80be9fuu4rc4vagg6dgvg3hb9f1mfdwynvfcglco8ps333oje9wlcbgzveuj8t3u3vcq1it2yss4159ayfo5nqgmrn6lm8yxrb8vvdtvp7hn51qc76dtv68bhfsuhjjni5crohkh8mfwyi5d77hrz5z6znfj4zguzzkek9hpgwr2oiw5jma77hzl3l276gagr0nzijj95pro5b600403nejdzb5jt7tkgspf9fggilih952ugrnicwuoe5fvze3m2onqofn801anku6awrn0ay8i5cg22etzw69m83rc5oepestvffqw2jn3lbgulsw91ev6sts6d52dojtiq0v1013z1rqtqxiznblqy43k54j0df3ze6bnct1evojczsbjrydx542jjfijstr3maqu05udsulnvsatslhgrc5xrg5zmsvw0vztrmdgknqszv2eivwvzeuoch8ab9gb3jsjewee47l2lldb1u4nyiw0ho9o2pnwmceo750n3f2thnxavj5tqkcufxocgid3zgmlupcsz7f9nzd6mjfjti3zky80izvkx6oczbmni0m8cc45ijiaq4c0vre70rmnuwj72ueb9xhbueoq5ki2brwwzmzc3l4etgua29jv2452z4r2rjhzp5zdv1hljj16cg5jorl5vzvbylvgig3jpnpgnv5dru3nxjv0j2320hak7ffb87fi9l7fsg1t83g7qzab991of90k5nkri9ut9ieaud5lu2iqbdtzrlfrdgchghc401b0erdkokhibio2ib2eg3yf2fmb0i47s0310ueirrs9haehcbd9tcmy6riblopg5a9ahybxaf7cromhp2czb6nhlunhjvly9tadecc7iqzy07cksjm22n1ixqulmcyprqlrd76wur0l545vu4lay1cvna5860kwy3y802dwoqz8aec4woc2gnj33nfwdric1s0b56iqcqgz1l9zpepj7s9o9dpc6ewc4xld56yn016jg44uchz31ltlfgo1go518xukg0g18xfkt90tiu6o7xx7ynmkbrzoag3nw8z4clg2u56miwcf0c5dsdm58bat0338tce79vwkbeqwl7j9uq1e3hbvc18n7yyca29k7ahuk7noqjkebw97e8uempoa6yjgz3uvbqqjvxnwsuoragcmuca9ybf7bqr0l0su3fu0j1ctm9291jw3ip1nh2vj == \s\p\d\o\r\b\e\3\n\a\7\o\2\f\0\0\4\3\1\l\q\l\w\h\h\q\q\t\g\i\7\e\1\s\z\y\p\9\3\r\c\z\9\7\1\g\6\1\7\v\i\3\n\e\k\4\t\b\j\0\4\f\g\b\c\t\z\h\i\1\l\z\k\d\1\5\n\6\c\5\0\9\y\5\0\i\m\h\y\m\m\u\6\h\v\z\y\8\7\e\t\r\u\v\q\y\6\4\9\7\f\g\s\g\n\h\w\c\f\x\3\9\j\4\j\l\7\y\9\q\h\9\y\n\7\w\7\e\o\u\s\x\g\1\l\l\5\a\m\6\d\3\m\j\k\p\v\y\w\b\k\l\2\b\s\6\t\7\b\2\v\8\y\s\a\c\2\y\s\y\3\r\a\q\6\9\e\j\7\5\r\u\3\m\l\z\e\c\y\9\h\w\m\y\l\x\n\w\b\3\l\f\n\z\o\o\c\c\w\9\8\y\7\y\6\p\d\m\0\9\y\p\p\s\g\z\2\2\c\o\9\k\a\2\r\x\g\z\q\s\j\y\f\m\i\w\2\7\5\g\4\0\4\w\4\t\h\1\f\y\9\j\a\2\y\v\2\d\i\c\t\k\c\d\1\0\i\m\7\j\m\f\w\q\8\0\z\y\r\f\e\b\i\k\e\c\s\4\u\m\q\u\p\f\x\w\q\a\2\x\1\d\u\s\4\1\w\8\t\a\2\r\w\g\7\s\q\k\0\w\m\3\7\g\7\a\a\w\8\u\h\u\8\m\v\y\n\o\h\u\e\j\3\w\a\e\9\s\e\a\e\u\7\7\n\x\l\l\2\x\8\1\t\6\r\v\p\s\v\i\v\l\2\b\c\c\k\w\5\5\q\t\d\p\m\3\o\g\z\2\h\q\3\g\6\u\q\e\v\x\q\e\q\u\6\c\p\8\9\6\g\e\1\2\w\g\i\0\7\o\2\i\0\x\9\e\y\o\f\y\0\7\d\s\x\o\a\r\k\m\w\r\l\7\0\x\s\j\c\3\v\8\l\q\1\2\j\m\f\8\v\k\a\m\y\8\4\r\v\r\8\7\u\w\r\p\j\r\k\o\j\k\b\y\h\v\5\9\p\z\a\4\e\m\3\g\l\x\v\m\l\a\5\8\m\e\m\v\y\m\d\e\c\u\b\z\j\6\r\v\v\b\q\z\p\w\f\f\n\x\c\e\h\1\i\a\7\2\f\z\6\5\6\k\9\c\w\q\b\3\t\6\2\v\y\b\w\6\6\u\g\s\0\7\v\z\b\3\w\r\h\4\q\8\1\r\8\j\0\o\9\1\a\v\6\r\c\4\u\j\q\p\4\9\i\x\d\4\t\i\y\i\4\o\r\l\z\l\o\4\l\0\m\8\j\m\k\i\t\g\f\x\q\s\z\m\7\4\b\8\p\r\0\w\x\1\p\2\4\w\0\u\8\n\e\r\f\4\p\d\y\l\d\i\u\4\v\k\y\n\u\t\7\k\y\n\6\9\j\5\3\f\7\7\4\h\4\z\2\k\r\z\s\q\m\t\5\0\h\i\c\4\f\h\h\1\r\8\r\7\w\0\w\6\s\w\0\p\6\i\s\0\o\e\7\4\5\m\c\c\s\t\v\y\z\s\9\8\p\o\z\8\8\c\o\f\p\f\f\h\d\b\0\5\x\8\2\w\e\j\9\o\v\3\s\i\9\u\x\5\t\r\3\j\9\7\c\g\v\x\3\8\y\a\w\g\p\j\2\w\r\h\a\t\p\l\m\5\e\q\9\j\l\s\u\5\h\7\c\q\n\3\i\w\1\o\i\k\b\3\7\3\t\p\4\l\5\a\b\p\d\4\n\6\a\l\2\4\s\a\g\z\o\8\k\2\w\b\d\n\x\j\5\j\m\h\f\4\i\o\t\8\7\b\u\m\r\e\j\g\p\1\h\p\6\g\y\2\o\v\a\m\a\7\w\q\x\k\b\b\5\1\t\2\n\n\4\w\o\0\x\u\v\0\h\7\m\6\4\h\f\d\c\d\a\k\a\y\h\v\e\6\s\1\5\1\6\9\j\a\t\a\n\2\d\p\z\r\n\y\l\c\2\0\u\b\9\u\s\q\a\a\n\6\j\w\r\c\f\w\a\f\9\9\s\o\n\6\w\n\
c\q\e\7\z\1\e\z\y\6\d\s\7\c\7\y\d\c\p\c\t\u\s\v\4\n\k\b\n\q\e\i\1\5\q\c\0\c\1\s\m\g\2\e\t\7\c\h\6\k\8\c\r\i\j\s\w\t\n\h\e\n\f\0\x\m\t\s\e\0\q\h\5\1\l\2\s\6\q\c\8\s\h\v\3\0\1\f\f\q\f\a\7\p\5\z\m\c\q\4\x\u\c\y\7\f\a\9\u\2\q\v\1\x\1\n\a\c\5\y\9\s\p\x\9\a\7\6\f\u\b\w\f\a\p\3\0\j\y\v\n\k\x\6\q\a\r\i\p\6\m\b\n\h\0\g\1\0\7\p\k\n\7\t\c\5\e\b\j\p\d\4\4\w\v\x\8\e\6\p\m\s\j\f\o\a\j\e\m\u\d\3\i\x\o\3\k\2\f\1\6\t\o\6\s\g\p\0\y\b\y\d\p\q\h\b\1\l\z\g\i\9\h\1\g\n\4\c\t\j\p\h\u\m\6\6\o\z\4\0\i\z\d\a\p\1\5\h\0\a\4\h\x\0\l\f\3\3\h\a\9\3\z\y\f\d\g\w\f\v\4\v\f\y\x\l\e\8\o\a\m\6\h\z\a\3\f\y\4\q\9\k\u\8\u\b\c\r\8\8\h\k\q\o\r\n\q\l\h\u\m\l\i\v\0\6\c\q\b\d\f\w\f\p\z\i\3\v\w\6\h\c\n\d\0\t\m\l\4\w\g\m\y\o\c\l\p\d\2\x\3\j\s\j\m\3\a\s\q\z\6\j\t\m\e\p\t\e\w\n\d\y\9\5\2\h\k\t\1\n\z\4\c\r\9\r\9\r\b\e\s\s\r\o\l\9\p\0\a\y\r\1\k\v\o\v\m\s\9\g\4\8\6\f\y\s\o\f\w\4\v\j\v\g\3\7\i\r\r\1\e\t\p\f\m\5\y\u\k\q\3\p\r\t\b\y\m\g\i\6\0\f\6\s\s\9\t\k\p\x\z\e\i\v\0\0\n\7\0\3\f\o\o\9\b\f\c\6\0\t\8\u\t\m\s\m\6\e\3\l\h\5\l\b\z\3\7\3\r\j\3\p\j\z\h\w\y\d\j\a\y\a\f\h\h\h\a\m\m\t\3\p\8\i\7\a\i\3\2\n\2\g\j\r\y\1\h\m\j\5\e\t\h\8\i\3\3\7\2\9\x\w\6\i\f\7\q\w\j\h\c\q\g\c\l\g\n\j\f\d\b\4\8\f\v\5\c\v\p\0\s\r\a\2\5\a\0\5\e\e\e\d\z\5\f\k\5\2\w\c\u\6\2\3\o\6\5\k\6\w\i\n\c\s\6\z\p\3\9\c\x\9\y\i\e\n\d\q\p\i\2\g\r\f\z\0\s\1\s\a\r\y\f\3\x\6\1\a\b\1\w\l\e\c\1\t\1\5\a\q\h\y\h\z\z\x\6\f\q\7\y\m\1\h\6\y\k\9\k\a\m\6\d\s\o\b\s\l\1\4\8\r\m\j\k\q\v\z\u\0\q\c\y\7\0\r\o\p\4\6\p\9\h\5\w\r\b\d\s\f\0\7\y\b\1\p\d\7\g\x\b\a\u\k\s\f\r\d\a\7\v\q\7\8\m\1\9\p\3\z\n\g\j\o\u\w\g\7\f\p\v\e\9\l\n\0\k\v\r\r\0\b\3\c\s\6\b\r\8\2\0\y\2\t\o\0\x\n\k\q\k\v\v\u\9\g\0\y\q\i\8\e\k\z\y\u\r\q\p\b\m\e\1\x\h\6\4\u\a\a\w\6\b\a\q\j\v\k\r\v\i\d\q\h\g\1\9\h\m\y\5\0\y\i\s\s\w\n\i\5\a\f\h\l\6\b\m\4\6\7\6\v\p\0\i\v\7\f\u\1\t\0\0\5\z\g\i\r\h\o\p\l\m\h\0\d\9\o\g\2\0\v\2\2\0\9\p\8\m\z\h\i\6\6\i\6\j\j\3\4\o\g\b\c\g\c\r\a\5\b\1\4\b\n\m\r\4\7\5\a\g\m\4\9\f\u\z\2\3\l\8\w\w\r\y\4\t\v\w\p\l\p\j\n\s\8\1\d\l\s\c\e\s\j\x\m\y\o\i\k\8\8\5\t\g\k\e\a\k\q\5\7\e\o\6\2\t\d\f\t\c\8\p\a\p\4\v\a\b\2\m\a\p\2\2\y\w\w\v\v\d\i\4\y\y\t\4\d\p\x\o\4\r\c\i\i\m\t\h\7\c\w\n\2\l\t\q\m\y\h\2\j\g\x\c\v\d\z\a\j\s\d\4\7\i\n\o\5\g\z\j\v\3\h\5\7\2\o\h\u\v\y\0\e\m\1\p\k\q\l\s\c\m\d\v\b\1\q\b\d\f\u\h\0\d\w\q\9\p\x\o\u\y\3\0\q\7\o\6\y\c\q\r\e\n\q\o\m\s\x\p\p\i\x\f\a\8\g\h\6\0\m\i\9\x\w\y\l\v\g\9\0\h\w\i\y\l\n\i\0\5\a\n\c\9\y\g\e\g\h\v\5\f\1\4\v\2\v\c\u\z\x\z\g\y\1\j\i\w\m\w\n\q\m\3\n\7\n\6\7\4\d\t\z\i\p\7\w\u\i\8\j\n\e\b\a\b\y\1\d\k\s\1\0\x\y\8\m\s\i\h\o\9\r\f\t\n\k\3\4\g\d\x\w\k\q\a\5\o\y\h\1\0\i\i\z\x\8\m\7\1\1\5\j\s\4\7\f\b\o\h\l\w\q\s\g\3\t\c\a\5\s\m\f\l\s\g\l\0\s\f\e\s\b\3\0\y\c\2\o\j\e\6\p\u\v\g\f\b\g\8\f\1\0\1\4\6\s\j\c\e\1\c\e\3\s\a\i\5\o\c\u\l\e\w\4\x\u\r\s\s\n\6\e\p\w\r\s\a\u\9\w\a\y\t\w\n\7\n\k\w\0\b\y\t\l\1\u\7\t\a\5\h\d\v\q\g\y\j\r\v\q\o\4\w\b\g\3\g\a\g\6\n\e\l\u\7\u\s\c\s\z\4\r\1\w\p\2\9\z\4\b\1\o\f\a\u\u\r\n\w\n\3\d\1\a\y\t\0\j\v\p\x\7\8\m\5\1\h\z\s\l\l\9\8\4\z\7\z\0\u\o\3\3\t\u\u\2\p\0\x\c\6\m\y\k\m\r\g\9\w\0\w\5\w\1\z\n\3\v\h\f\c\k\7\p\o\i\a\j\t\x\v\o\l\a\t\q\5\h\d\4\2\t\5\n\5\y\3\0\h\d\9\4\w\l\r\k\8\g\y\i\g\p\x\3\o\2\v\c\b\s\i\o\i\d\t\o\w\f\t\s\9\3\0\1\c\f\m\q\u\d\h\h\x\d\x\6\j\6\h\j\f\f\f\q\l\v\t\7\l\z\8\3\y\h\0\b\7\e\m\t\m\z\i\n\1\p\9\8\6\8\p\g\4\e\u\0\y\p\r\m\8\q\w\y\h\n\d\f\j\e\3\c\z\e\3\c\l\8\5\0\h\4\c\3\p\c\3\3\z\m\u\8\m\l\m\9\7\o\m\l\5\j\c\b\g\h\d\z\4\a\m\1\c\r\b\a\3\1\6\7\4\d\l\m\z\e\p\1\m\r\r\v\1\n\o\3\0\8\v\h\g\g\a\i\z\m\2\e\2\0\k\7\a\3\4\q\g\y\i\5\7\2\h\u\n\6\o\4\2\c\w\b\a\u\3\x\b\7\j\x\y\a\m\n\4\h\1\8\9\z\8\8\q\v\r\j\8\i\i\8\g\i\h\d\f\b\j\c\c\0\7\p\p\w\r\j\c\x\2\k\e\k\j\i\v\l\f\y\9\x\8\m\w\7\g\f\z\a\s\y\5\v\1\k\s\g
\g\z\s\b\e\8\t\s\c\8\v\n\o\2\a\3\j\d\t\x\c\9\p\e\n\1\6\v\y\i\d\w\3\q\m\n\d\l\y\4\b\0\j\9\8\o\c\m\v\j\d\9\4\t\c\u\1\9\8\3\b\i\a\o\i\s\n\d\3\h\z\c\g\d\m\d\h\q\q\r\9\5\x\e\g\9\0\c\d\i\b\o\u\7\k\k\r\9\x\2\1\h\h\u\w\o\c\x\8\z\3\t\f\c\2\7\z\y\b\h\f\k\8\0\b\e\9\f\u\u\4\r\c\4\v\a\g\g\6\d\g\v\g\3\h\b\9\f\1\m\f\d\w\y\n\v\f\c\g\l\c\o\8\p\s\3\3\3\o\j\e\9\w\l\c\b\g\z\v\e\u\j\8\t\3\u\3\v\c\q\1\i\t\2\y\s\s\4\1\5\9\a\y\f\o\5\n\q\g\m\r\n\6\l\m\8\y\x\r\b\8\v\v\d\t\v\p\7\h\n\5\1\q\c\7\6\d\t\v\6\8\b\h\f\s\u\h\j\j\n\i\5\c\r\o\h\k\h\8\m\f\w\y\i\5\d\7\7\h\r\z\5\z\6\z\n\f\j\4\z\g\u\z\z\k\e\k\9\h\p\g\w\r\2\o\i\w\5\j\m\a\7\7\h\z\l\3\l\2\7\6\g\a\g\r\0\n\z\i\j\j\9\5\p\r\o\5\b\6\0\0\4\0\3\n\e\j\d\z\b\5\j\t\7\t\k\g\s\p\f\9\f\g\g\i\l\i\h\9\5\2\u\g\r\n\i\c\w\u\o\e\5\f\v\z\e\3\m\2\o\n\q\o\f\n\8\0\1\a\n\k\u\6\a\w\r\n\0\a\y\8\i\5\c\g\2\2\e\t\z\w\6\9\m\8\3\r\c\5\o\e\p\e\s\t\v\f\f\q\w\2\j\n\3\l\b\g\u\l\s\w\9\1\e\v\6\s\t\s\6\d\5\2\d\o\j\t\i\q\0\v\1\0\1\3\z\1\r\q\t\q\x\i\z\n\b\l\q\y\4\3\k\5\4\j\0\d\f\3\z\e\6\b\n\c\t\1\e\v\o\j\c\z\s\b\j\r\y\d\x\5\4\2\j\j\f\i\j\s\t\r\3\m\a\q\u\0\5\u\d\s\u\l\n\v\s\a\t\s\l\h\g\r\c\5\x\r\g\5\z\m\s\v\w\0\v\z\t\r\m\d\g\k\n\q\s\z\v\2\e\i\v\w\v\z\e\u\o\c\h\8\a\b\9\g\b\3\j\s\j\e\w\e\e\4\7\l\2\l\l\d\b\1\u\4\n\y\i\w\0\h\o\9\o\2\p\n\w\m\c\e\o\7\5\0\n\3\f\2\t\h\n\x\a\v\j\5\t\q\k\c\u\f\x\o\c\g\i\d\3\z\g\m\l\u\p\c\s\z\7\f\9\n\z\d\6\m\j\f\j\t\i\3\z\k\y\8\0\i\z\v\k\x\6\o\c\z\b\m\n\i\0\m\8\c\c\4\5\i\j\i\a\q\4\c\0\v\r\e\7\0\r\m\n\u\w\j\7\2\u\e\b\9\x\h\b\u\e\o\q\5\k\i\2\b\r\w\w\z\m\z\c\3\l\4\e\t\g\u\a\2\9\j\v\2\4\5\2\z\4\r\2\r\j\h\z\p\5\z\d\v\1\h\l\j\j\1\6\c\g\5\j\o\r\l\5\v\z\v\b\y\l\v\g\i\g\3\j\p\n\p\g\n\v\5\d\r\u\3\n\x\j\v\0\j\2\3\2\0\h\a\k\7\f\f\b\8\7\f\i\9\l\7\f\s\g\1\t\8\3\g\7\q\z\a\b\9\9\1\o\f\9\0\k\5\n\k\r\i\9\u\t\9\i\e\a\u\d\5\l\u\2\i\q\b\d\t\z\r\l\f\r\d\g\c\h\g\h\c\4\0\1\b\0\e\r\d\k\o\k\h\i\b\i\o\2\i\b\2\e\g\3\y\f\2\f\m\b\0\i\4\7\s\0\3\1\0\u\e\i\r\r\s\9\h\a\e\h\c\b\d\9\t\c\m\y\6\r\i\b\l\o\p\g\5\a\9\a\h\y\b\x\a\f\7\c\r\o\m\h\p\2\c\z\b\6\n\h\l\u\n\h\j\v\l\y\9\t\a\d\e\c\c\7\i\q\z\y\0\7\c\k\s\j\m\2\2\n\1\i\x\q\u\l\m\c\y\p\r\q\l\r\d\7\6\w\u\r\0\l\5\4\5\v\u\4\l\a\y\1\c\v\n\a\5\8\6\0\k\w\y\3\y\8\0\2\d\w\o\q\z\8\a\e\c\4\w\o\c\2\g\n\j\3\3\n\f\w\d\r\i\c\1\s\0\b\5\6\i\q\c\q\g\z\1\l\9\z\p\e\p\j\7\s\9\o\9\d\p\c\6\e\w\c\4\x\l\d\5\6\y\n\0\1\6\j\g\4\4\u\c\h\z\3\1\l\t\l\f\g\o\1\g\o\5\1\8\x\u\k\g\0\g\1\8\x\f\k\t\9\0\t\i\u\6\o\7\x\x\7\y\n\m\k\b\r\z\o\a\g\3\n\w\8\z\4\c\l\g\2\u\5\6\m\i\w\c\f\0\c\5\d\s\d\m\5\8\b\a\t\0\3\3\8\t\c\e\7\9\v\w\k\b\e\q\w\l\7\j\9\u\q\1\e\3\h\b\v\c\1\8\n\7\y\y\c\a\2\9\k\7\a\h\u\k\7\n\o\q\j\k\e\b\w\9\7\e\8\u\e\m\p\o\a\6\y\j\g\z\3\u\v\b\q\q\j\v\x\n\w\s\u\o\r\a\g\c\m\u\c\a\9\y\b\f\7\b\q\r\0\l\0\s\u\3\f\u\0\j\1\c\t\m\9\2\9\1\j\w\3\i\p\1\n\h\2\v\j ]] 00:06:53.488 00:06:53.488 real 0m1.249s 00:06:53.488 user 0m0.813s 00:06:53.488 sys 0m0.589s 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.488 ************************************ 00:06:53.488 END TEST dd_rw_offset 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:53.488 ************************************ 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local 
size=0xffff 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:53.488 20:48:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:53.488 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:53.488 [2024-08-11 20:48:04.186269] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:53.488 [2024-08-11 20:48:04.186522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70762 ] 00:06:53.488 { 00:06:53.488 "subsystems": [ 00:06:53.488 { 00:06:53.488 "subsystem": "bdev", 00:06:53.488 "config": [ 00:06:53.489 { 00:06:53.489 "params": { 00:06:53.489 "trtype": "pcie", 00:06:53.489 "traddr": "0000:00:10.0", 00:06:53.489 "name": "Nvme0" 00:06:53.489 }, 00:06:53.489 "method": "bdev_nvme_attach_controller" 00:06:53.489 }, 00:06:53.489 { 00:06:53.489 "method": "bdev_wait_for_examine" 00:06:53.489 } 00:06:53.489 ] 00:06:53.489 } 00:06:53.489 ] 00:06:53.489 } 00:06:53.746 [2024-08-11 20:48:04.322915] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.746 [2024-08-11 20:48:04.383604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.746 [2024-08-11 20:48:04.440831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.004  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:54.004 00:06:54.004 20:48:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.004 00:06:54.004 real 0m17.624s 00:06:54.004 user 0m12.534s 00:06:54.004 sys 0m6.667s 00:06:54.004 20:48:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.004 ************************************ 00:06:54.004 END TEST spdk_dd_basic_rw 00:06:54.004 ************************************ 00:06:54.004 20:48:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:54.004 20:48:04 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:54.004 20:48:04 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:54.004 20:48:04 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.004 20:48:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:54.262 ************************************ 00:06:54.262 START TEST spdk_dd_posix 00:06:54.262 ************************************ 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:54.262 * Looking for test storage... 
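The clear_nvme step above hands spdk_dd its bdev configuration as JSON on a file descriptor (--json /dev/fd/62, produced by gen_conf). A minimal standalone sketch of that invocation, assuming the same spdk_dd build path and the NVMe controller at 0000:00:10.0 used in this run; the SPDK_DD variable and the here-document are illustrative stand-ins for the test's gen_conf plumbing.

# Hypothetical standalone reproduction of the clear_nvme invocation shown above
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)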
00:06:54.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:54.262 * First test run, liburing in use 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:54.262 ************************************ 00:06:54.262 START TEST dd_flag_append 00:06:54.262 ************************************ 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1121 -- # append 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=zi0d9lpi8qu1c3l4ygu9kvygrkdqeq37 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=ayqyfkgkrd6153ou9q5t7uyup4mspz4c 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s zi0d9lpi8qu1c3l4ygu9kvygrkdqeq37 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s ayqyfkgkrd6153ou9q5t7uyup4mspz4c 00:06:54.262 20:48:04 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:54.262 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:54.263 [2024-08-11 20:48:04.950047] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
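The dd_flag_append test above generates one 32-byte string for each dump file and then copies dd.dump0 onto dd.dump1 with --oflag=append; the pattern match that follows in the log checks that dd.dump1 now holds its original bytes with dd.dump0's bytes appended. A minimal sketch of that check, assuming the spdk_dd path from this run and the test/dd working directory; the SPDK_DD variable is illustrative.

# Hypothetical sketch of the append check, reusing the two strings generated above
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd
printf '%s' zi0d9lpi8qu1c3l4ygu9kvygrkdqeq37 > dd.dump0   # the 32 bytes generated for dump0
printf '%s' ayqyfkgkrd6153ou9q5t7uyup4mspz4c > dd.dump1   # the 32 bytes generated for dump1
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ $(<dd.dump1) == ayqyfkgkrd6153ou9q5t7uyup4mspz4czi0d9lpi8qu1c3l4ygu9kvygrkdqeq37 ]] && \
  echo 'dd.dump1 kept its original bytes and gained dd.dump0 at the end'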
00:06:54.263 [2024-08-11 20:48:04.950264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70826 ] 00:06:54.531 [2024-08-11 20:48:05.086644] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.531 [2024-08-11 20:48:05.140225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.531 [2024-08-11 20:48:05.192511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.804  Copying: 32/32 [B] (average 31 kBps) 00:06:54.804 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ ayqyfkgkrd6153ou9q5t7uyup4mspz4czi0d9lpi8qu1c3l4ygu9kvygrkdqeq37 == \a\y\q\y\f\k\g\k\r\d\6\1\5\3\o\u\9\q\5\t\7\u\y\u\p\4\m\s\p\z\4\c\z\i\0\d\9\l\p\i\8\q\u\1\c\3\l\4\y\g\u\9\k\v\y\g\r\k\d\q\e\q\3\7 ]] 00:06:54.804 00:06:54.804 real 0m0.507s 00:06:54.804 user 0m0.251s 00:06:54.804 sys 0m0.268s 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.804 ************************************ 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:54.804 END TEST dd_flag_append 00:06:54.804 ************************************ 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:54.804 ************************************ 00:06:54.804 START TEST dd_flag_directory 00:06:54.804 ************************************ 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1121 -- # directory 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # local es=0 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.804 20:48:05 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.804 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:54.804 [2024-08-11 20:48:05.516896] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:54.804 [2024-08-11 20:48:05.516998] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70849 ] 00:06:55.062 [2024-08-11 20:48:05.655085] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.062 [2024-08-11 20:48:05.733854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.062 [2024-08-11 20:48:05.787034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.062 [2024-08-11 20:48:05.822947] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:55.062 [2024-08-11 20:48:05.823004] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:55.062 [2024-08-11 20:48:05.823018] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.320 [2024-08-11 20:48:05.938993] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # es=236 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@658 -- # es=108 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # case "$es" in 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@666 -- # es=1 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # local es=0 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:55.320 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:55.320 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:55.320 [2024-08-11 20:48:06.084141] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:55.320 [2024-08-11 20:48:06.084399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70866 ] 00:06:55.577 [2024-08-11 20:48:06.222034] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.577 [2024-08-11 20:48:06.309853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.835 [2024-08-11 20:48:06.364878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.835 [2024-08-11 20:48:06.396926] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:55.835 [2024-08-11 20:48:06.396986] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:55.835 [2024-08-11 20:48:06.397001] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.835 [2024-08-11 20:48:06.506533] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.835 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@649 -- # es=236 00:06:55.835 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:55.835 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@658 -- # es=108 00:06:55.835 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # case "$es" in 00:06:55.835 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@666 -- # es=1 00:06:55.835 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:55.835 00:06:55.835 real 0m1.134s 00:06:55.835 user 0m0.622s 00:06:55.835 sys 0m0.300s 00:06:55.835 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.835 ************************************ 00:06:55.835 END TEST dd_flag_directory 00:06:55.835 
************************************ 00:06:55.835 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:56.093 ************************************ 00:06:56.093 START TEST dd_flag_nofollow 00:06:56.093 ************************************ 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1121 -- # nofollow 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # local es=0 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:56.093 20:48:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
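The dd_flag_nofollow test above points symlinks at the dump files with ln -fs and then asks spdk_dd to open them with --iflag=nofollow / --oflag=nofollow; the "Too many levels of symbolic links" errors reported further down are the expected pass condition, while the final copy without the flag follows the link normally. A rough sketch under those assumptions (same spdk_dd path and test/dd directory as this run; SPDK_DD is an illustrative variable).

# Hypothetical sketch of the nofollow negative test shown above
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd
ln -fs dd.dump0 dd.dump0.link
if ! "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
  echo 'nofollow refused to read through the symlink, as the test expects'
fi
"$SPDK_DD" --if=dd.dump0.link --of=dd.dump1   # without nofollow the link is followed and the copy succeeds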
00:06:56.093 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:56.093 [2024-08-11 20:48:06.719947] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:56.093 [2024-08-11 20:48:06.720126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70889 ] 00:06:56.093 [2024-08-11 20:48:06.851379] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.352 [2024-08-11 20:48:06.946178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.352 [2024-08-11 20:48:07.000524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.352 [2024-08-11 20:48:07.032168] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:56.352 [2024-08-11 20:48:07.032494] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:56.352 [2024-08-11 20:48:07.032529] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.611 [2024-08-11 20:48:07.143139] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:56.611 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # es=216 00:06:56.611 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:56.611 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@658 -- # es=88 00:06:56.611 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # case "$es" in 00:06:56.611 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@666 -- # es=1 00:06:56.611 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:56.611 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:56.611 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # local es=0 00:06:56.611 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:56.612 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.612 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:56.612 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.612 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:06:56.612 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.612 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # case "$(type 
-t "$arg")" in 00:06:56.612 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.612 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:56.612 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:56.612 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:56.612 [2024-08-11 20:48:07.290791] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:56.612 [2024-08-11 20:48:07.290905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70904 ] 00:06:56.872 [2024-08-11 20:48:07.427925] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.872 [2024-08-11 20:48:07.520788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.872 [2024-08-11 20:48:07.574030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.872 [2024-08-11 20:48:07.606601] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:56.872 [2024-08-11 20:48:07.606671] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:56.872 [2024-08-11 20:48:07.606703] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.130 [2024-08-11 20:48:07.717252] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:57.131 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@649 -- # es=216 00:06:57.131 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:06:57.131 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@658 -- # es=88 00:06:57.131 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # case "$es" in 00:06:57.131 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@666 -- # es=1 00:06:57.131 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:06:57.131 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:57.131 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:57.131 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:57.131 20:48:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.131 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:57.131 [2024-08-11 20:48:07.870145] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:06:57.131 [2024-08-11 20:48:07.870248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70906 ] 00:06:57.389 [2024-08-11 20:48:08.006132] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.389 [2024-08-11 20:48:08.094661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.389 [2024-08-11 20:48:08.147815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.648  Copying: 512/512 [B] (average 500 kBps) 00:06:57.648 00:06:57.648 ************************************ 00:06:57.648 END TEST dd_flag_nofollow 00:06:57.648 ************************************ 00:06:57.648 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ i6e10olcdb9ylr193n7vqoxtgojxxevltdds03tf2ybj1oj6fptvhpvpg1vjkke62vuhk7s1til6md7k6uqwsjqn9q617ad88tckgtknx9q0nm2kqp1p4l7aewrddqw3bwvqseu2er4iq6zs1hx24lyzdg5oocxctbe21o3zwq72hlf279glmkfs42jy3kks87wwy50br2cacrho34p8tdovdn3jzozmvyivypd1ldvewm43auyll7r67e9xnuu72nuvpnohgqfljzn0g6udd0p8r26dsb3aec1qmd5emntq2bibfan2h5z6ph8l3879o299bx2dw9igpzdvga64he5qbiwonpz8yr7gsgr8cf8fn0e3tkpkt46muwvhacni1j0u48oancgs3ksw88b1sfr7wz1kclcd4g8ixduteco051uz4n93dakzg0dwyipkp89mhlgju2y9vcw1znfadhs5detyc35pipj8ikn7gei2h7bnl1xkw86x0ybdt0f6 == \i\6\e\1\0\o\l\c\d\b\9\y\l\r\1\9\3\n\7\v\q\o\x\t\g\o\j\x\x\e\v\l\t\d\d\s\0\3\t\f\2\y\b\j\1\o\j\6\f\p\t\v\h\p\v\p\g\1\v\j\k\k\e\6\2\v\u\h\k\7\s\1\t\i\l\6\m\d\7\k\6\u\q\w\s\j\q\n\9\q\6\1\7\a\d\8\8\t\c\k\g\t\k\n\x\9\q\0\n\m\2\k\q\p\1\p\4\l\7\a\e\w\r\d\d\q\w\3\b\w\v\q\s\e\u\2\e\r\4\i\q\6\z\s\1\h\x\2\4\l\y\z\d\g\5\o\o\c\x\c\t\b\e\2\1\o\3\z\w\q\7\2\h\l\f\2\7\9\g\l\m\k\f\s\4\2\j\y\3\k\k\s\8\7\w\w\y\5\0\b\r\2\c\a\c\r\h\o\3\4\p\8\t\d\o\v\d\n\3\j\z\o\z\m\v\y\i\v\y\p\d\1\l\d\v\e\w\m\4\3\a\u\y\l\l\7\r\6\7\e\9\x\n\u\u\7\2\n\u\v\p\n\o\h\g\q\f\l\j\z\n\0\g\6\u\d\d\0\p\8\r\2\6\d\s\b\3\a\e\c\1\q\m\d\5\e\m\n\t\q\2\b\i\b\f\a\n\2\h\5\z\6\p\h\8\l\3\8\7\9\o\2\9\9\b\x\2\d\w\9\i\g\p\z\d\v\g\a\6\4\h\e\5\q\b\i\w\o\n\p\z\8\y\r\7\g\s\g\r\8\c\f\8\f\n\0\e\3\t\k\p\k\t\4\6\m\u\w\v\h\a\c\n\i\1\j\0\u\4\8\o\a\n\c\g\s\3\k\s\w\8\8\b\1\s\f\r\7\w\z\1\k\c\l\c\d\4\g\8\i\x\d\u\t\e\c\o\0\5\1\u\z\4\n\9\3\d\a\k\z\g\0\d\w\y\i\p\k\p\8\9\m\h\l\g\j\u\2\y\9\v\c\w\1\z\n\f\a\d\h\s\5\d\e\t\y\c\3\5\p\i\p\j\8\i\k\n\7\g\e\i\2\h\7\b\n\l\1\x\k\w\8\6\x\0\y\b\d\t\0\f\6 ]] 00:06:57.648 00:06:57.648 real 0m1.726s 00:06:57.648 user 0m0.945s 00:06:57.648 sys 0m0.578s 00:06:57.648 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.648 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:57.907 ************************************ 00:06:57.907 START TEST dd_flag_noatime 00:06:57.907 ************************************ 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1121 -- # noatime 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:57.907 20:48:08 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1723409288 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1723409288 00:06:57.907 20:48:08 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:58.841 20:48:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.841 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:58.841 [2024-08-11 20:48:09.517921] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:06:58.842 [2024-08-11 20:48:09.518042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70954 ] 00:06:59.099 [2024-08-11 20:48:09.655610] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.099 [2024-08-11 20:48:09.742485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.100 [2024-08-11 20:48:09.796961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.357  Copying: 512/512 [B] (average 500 kBps) 00:06:59.357 00:06:59.357 20:48:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.357 20:48:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1723409288 )) 00:06:59.357 20:48:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.357 20:48:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1723409288 )) 00:06:59.357 20:48:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.357 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:06:59.358 [2024-08-11 20:48:10.093243] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
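The dd_flag_noatime test above snapshots the access time with stat --printf=%X, reads dd.dump0 through spdk_dd with --iflag=noatime, and asserts the atime is unchanged; a later copy without the flag is expected to advance it. A rough sketch of that flow, assuming the same paths as this run and a filesystem that actually updates atime on read (relatime or noatime mounts would mask the second check); SPDK_DD and atime_before are illustrative names.

# Hypothetical sketch of the noatime check shown above
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd
atime_before=$(stat --printf=%X dd.dump0)
"$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_before )) && echo 'noatime read left atime alone'
sleep 1
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1
(( $(stat --printf=%X dd.dump0) > atime_before )) && echo 'plain read advanced atime'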
00:06:59.358 [2024-08-11 20:48:10.093506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70973 ] 00:06:59.615 [2024-08-11 20:48:10.230350] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.615 [2024-08-11 20:48:10.320107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.615 [2024-08-11 20:48:10.373033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.874  Copying: 512/512 [B] (average 500 kBps) 00:06:59.874 00:06:59.874 20:48:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.874 20:48:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1723409290 )) 00:06:59.874 00:06:59.874 real 0m2.162s 00:06:59.874 user 0m0.623s 00:06:59.874 sys 0m0.561s 00:06:59.874 20:48:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.874 20:48:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:59.874 ************************************ 00:06:59.874 END TEST dd_flag_noatime 00:06:59.874 ************************************ 00:06:59.874 20:48:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:59.874 20:48:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:59.874 20:48:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.874 20:48:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:00.132 ************************************ 00:07:00.132 START TEST dd_flags_misc 00:07:00.133 ************************************ 00:07:00.133 20:48:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1121 -- # io 00:07:00.133 20:48:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:00.133 20:48:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:00.133 20:48:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:00.133 20:48:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:00.133 20:48:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:00.133 20:48:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:00.133 20:48:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:00.133 20:48:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:00.133 20:48:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:00.133 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:00.133 [2024-08-11 20:48:10.709066] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
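The dd_flags_misc test above declares flags_ro=(direct nonblock) and flags_rw=("${flags_ro[@]}" sync dsync) and then performs one 512-byte copy of dd.dump0 to dd.dump1 for every read/write flag pair, which is why the invocations that follow cycle through direct, nonblock, sync and dsync. A condensed sketch of that matrix, assuming the same paths as this run; cmp stands in for the test's own content comparison and SPDK_DD is an illustrative variable.

# Hypothetical sketch of the flag matrix exercised in the invocations below
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    cmp -s dd.dump0 dd.dump1 && echo "ok: $flag_ro -> $flag_rw"
  done
done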
00:07:00.133 [2024-08-11 20:48:10.709159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70996 ] 00:07:00.133 [2024-08-11 20:48:10.836727] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.133 [2024-08-11 20:48:10.907174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.391 [2024-08-11 20:48:10.964528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.650  Copying: 512/512 [B] (average 500 kBps) 00:07:00.650 00:07:00.650 20:48:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t4b30rs4g0mhtrtfb2i9vgcty2r3vn0fwqhoo8dx8k55y8c5eg5ogcqolqkb4rj9mel79bir0gsbxo0p1i0n8iic3zpe90pzjhrjapcicct2z7shk9t3bo67wwp8s0neatnepsiuilklsi4sti6y9r5scqp5wngxshe0920nqflfur5jxceuqp7dl0bpss9h9l84dlenrqq2v8atkdcnenh7g1ly95haozz49zfwovxeu1n6w8aizoqyh2wpi9llm6ucsmxre5bnnypaulxrzu553plqepriskvoosu6bg5vqubiflbm26wvj1ryp18p86etmbjj7kz4t6ess86l5l3th9cwe1mgo7sdgyouyjaa6rts16joy8mfiqryemdakdssw9o8zh4m3iyxxpxk3c1jyqhlc5rqjos5xdtsjolbfd05t6pwvqxb1bshpmjfwi22qqzwztdiwvlro1ubghobz26227anlf6letlmkejfl0nzxoe4g3q8ca4gfh96 == \t\4\b\3\0\r\s\4\g\0\m\h\t\r\t\f\b\2\i\9\v\g\c\t\y\2\r\3\v\n\0\f\w\q\h\o\o\8\d\x\8\k\5\5\y\8\c\5\e\g\5\o\g\c\q\o\l\q\k\b\4\r\j\9\m\e\l\7\9\b\i\r\0\g\s\b\x\o\0\p\1\i\0\n\8\i\i\c\3\z\p\e\9\0\p\z\j\h\r\j\a\p\c\i\c\c\t\2\z\7\s\h\k\9\t\3\b\o\6\7\w\w\p\8\s\0\n\e\a\t\n\e\p\s\i\u\i\l\k\l\s\i\4\s\t\i\6\y\9\r\5\s\c\q\p\5\w\n\g\x\s\h\e\0\9\2\0\n\q\f\l\f\u\r\5\j\x\c\e\u\q\p\7\d\l\0\b\p\s\s\9\h\9\l\8\4\d\l\e\n\r\q\q\2\v\8\a\t\k\d\c\n\e\n\h\7\g\1\l\y\9\5\h\a\o\z\z\4\9\z\f\w\o\v\x\e\u\1\n\6\w\8\a\i\z\o\q\y\h\2\w\p\i\9\l\l\m\6\u\c\s\m\x\r\e\5\b\n\n\y\p\a\u\l\x\r\z\u\5\5\3\p\l\q\e\p\r\i\s\k\v\o\o\s\u\6\b\g\5\v\q\u\b\i\f\l\b\m\2\6\w\v\j\1\r\y\p\1\8\p\8\6\e\t\m\b\j\j\7\k\z\4\t\6\e\s\s\8\6\l\5\l\3\t\h\9\c\w\e\1\m\g\o\7\s\d\g\y\o\u\y\j\a\a\6\r\t\s\1\6\j\o\y\8\m\f\i\q\r\y\e\m\d\a\k\d\s\s\w\9\o\8\z\h\4\m\3\i\y\x\x\p\x\k\3\c\1\j\y\q\h\l\c\5\r\q\j\o\s\5\x\d\t\s\j\o\l\b\f\d\0\5\t\6\p\w\v\q\x\b\1\b\s\h\p\m\j\f\w\i\2\2\q\q\z\w\z\t\d\i\w\v\l\r\o\1\u\b\g\h\o\b\z\2\6\2\2\7\a\n\l\f\6\l\e\t\l\m\k\e\j\f\l\0\n\z\x\o\e\4\g\3\q\8\c\a\4\g\f\h\9\6 ]] 00:07:00.650 20:48:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:00.650 20:48:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:00.650 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:00.650 [2024-08-11 20:48:11.251343] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:00.650 [2024-08-11 20:48:11.251446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71011 ] 00:07:00.650 [2024-08-11 20:48:11.389396] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.909 [2024-08-11 20:48:11.478451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.909 [2024-08-11 20:48:11.532081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.167  Copying: 512/512 [B] (average 500 kBps) 00:07:01.167 00:07:01.168 20:48:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t4b30rs4g0mhtrtfb2i9vgcty2r3vn0fwqhoo8dx8k55y8c5eg5ogcqolqkb4rj9mel79bir0gsbxo0p1i0n8iic3zpe90pzjhrjapcicct2z7shk9t3bo67wwp8s0neatnepsiuilklsi4sti6y9r5scqp5wngxshe0920nqflfur5jxceuqp7dl0bpss9h9l84dlenrqq2v8atkdcnenh7g1ly95haozz49zfwovxeu1n6w8aizoqyh2wpi9llm6ucsmxre5bnnypaulxrzu553plqepriskvoosu6bg5vqubiflbm26wvj1ryp18p86etmbjj7kz4t6ess86l5l3th9cwe1mgo7sdgyouyjaa6rts16joy8mfiqryemdakdssw9o8zh4m3iyxxpxk3c1jyqhlc5rqjos5xdtsjolbfd05t6pwvqxb1bshpmjfwi22qqzwztdiwvlro1ubghobz26227anlf6letlmkejfl0nzxoe4g3q8ca4gfh96 == \t\4\b\3\0\r\s\4\g\0\m\h\t\r\t\f\b\2\i\9\v\g\c\t\y\2\r\3\v\n\0\f\w\q\h\o\o\8\d\x\8\k\5\5\y\8\c\5\e\g\5\o\g\c\q\o\l\q\k\b\4\r\j\9\m\e\l\7\9\b\i\r\0\g\s\b\x\o\0\p\1\i\0\n\8\i\i\c\3\z\p\e\9\0\p\z\j\h\r\j\a\p\c\i\c\c\t\2\z\7\s\h\k\9\t\3\b\o\6\7\w\w\p\8\s\0\n\e\a\t\n\e\p\s\i\u\i\l\k\l\s\i\4\s\t\i\6\y\9\r\5\s\c\q\p\5\w\n\g\x\s\h\e\0\9\2\0\n\q\f\l\f\u\r\5\j\x\c\e\u\q\p\7\d\l\0\b\p\s\s\9\h\9\l\8\4\d\l\e\n\r\q\q\2\v\8\a\t\k\d\c\n\e\n\h\7\g\1\l\y\9\5\h\a\o\z\z\4\9\z\f\w\o\v\x\e\u\1\n\6\w\8\a\i\z\o\q\y\h\2\w\p\i\9\l\l\m\6\u\c\s\m\x\r\e\5\b\n\n\y\p\a\u\l\x\r\z\u\5\5\3\p\l\q\e\p\r\i\s\k\v\o\o\s\u\6\b\g\5\v\q\u\b\i\f\l\b\m\2\6\w\v\j\1\r\y\p\1\8\p\8\6\e\t\m\b\j\j\7\k\z\4\t\6\e\s\s\8\6\l\5\l\3\t\h\9\c\w\e\1\m\g\o\7\s\d\g\y\o\u\y\j\a\a\6\r\t\s\1\6\j\o\y\8\m\f\i\q\r\y\e\m\d\a\k\d\s\s\w\9\o\8\z\h\4\m\3\i\y\x\x\p\x\k\3\c\1\j\y\q\h\l\c\5\r\q\j\o\s\5\x\d\t\s\j\o\l\b\f\d\0\5\t\6\p\w\v\q\x\b\1\b\s\h\p\m\j\f\w\i\2\2\q\q\z\w\z\t\d\i\w\v\l\r\o\1\u\b\g\h\o\b\z\2\6\2\2\7\a\n\l\f\6\l\e\t\l\m\k\e\j\f\l\0\n\z\x\o\e\4\g\3\q\8\c\a\4\g\f\h\9\6 ]] 00:07:01.168 20:48:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:01.168 20:48:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:01.168 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:01.168 [2024-08-11 20:48:11.811997] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:01.168 [2024-08-11 20:48:11.812099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71015 ] 00:07:01.426 [2024-08-11 20:48:11.948092] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.426 [2024-08-11 20:48:12.040247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.426 [2024-08-11 20:48:12.095223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.685  Copying: 512/512 [B] (average 83 kBps) 00:07:01.685 00:07:01.685 20:48:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t4b30rs4g0mhtrtfb2i9vgcty2r3vn0fwqhoo8dx8k55y8c5eg5ogcqolqkb4rj9mel79bir0gsbxo0p1i0n8iic3zpe90pzjhrjapcicct2z7shk9t3bo67wwp8s0neatnepsiuilklsi4sti6y9r5scqp5wngxshe0920nqflfur5jxceuqp7dl0bpss9h9l84dlenrqq2v8atkdcnenh7g1ly95haozz49zfwovxeu1n6w8aizoqyh2wpi9llm6ucsmxre5bnnypaulxrzu553plqepriskvoosu6bg5vqubiflbm26wvj1ryp18p86etmbjj7kz4t6ess86l5l3th9cwe1mgo7sdgyouyjaa6rts16joy8mfiqryemdakdssw9o8zh4m3iyxxpxk3c1jyqhlc5rqjos5xdtsjolbfd05t6pwvqxb1bshpmjfwi22qqzwztdiwvlro1ubghobz26227anlf6letlmkejfl0nzxoe4g3q8ca4gfh96 == \t\4\b\3\0\r\s\4\g\0\m\h\t\r\t\f\b\2\i\9\v\g\c\t\y\2\r\3\v\n\0\f\w\q\h\o\o\8\d\x\8\k\5\5\y\8\c\5\e\g\5\o\g\c\q\o\l\q\k\b\4\r\j\9\m\e\l\7\9\b\i\r\0\g\s\b\x\o\0\p\1\i\0\n\8\i\i\c\3\z\p\e\9\0\p\z\j\h\r\j\a\p\c\i\c\c\t\2\z\7\s\h\k\9\t\3\b\o\6\7\w\w\p\8\s\0\n\e\a\t\n\e\p\s\i\u\i\l\k\l\s\i\4\s\t\i\6\y\9\r\5\s\c\q\p\5\w\n\g\x\s\h\e\0\9\2\0\n\q\f\l\f\u\r\5\j\x\c\e\u\q\p\7\d\l\0\b\p\s\s\9\h\9\l\8\4\d\l\e\n\r\q\q\2\v\8\a\t\k\d\c\n\e\n\h\7\g\1\l\y\9\5\h\a\o\z\z\4\9\z\f\w\o\v\x\e\u\1\n\6\w\8\a\i\z\o\q\y\h\2\w\p\i\9\l\l\m\6\u\c\s\m\x\r\e\5\b\n\n\y\p\a\u\l\x\r\z\u\5\5\3\p\l\q\e\p\r\i\s\k\v\o\o\s\u\6\b\g\5\v\q\u\b\i\f\l\b\m\2\6\w\v\j\1\r\y\p\1\8\p\8\6\e\t\m\b\j\j\7\k\z\4\t\6\e\s\s\8\6\l\5\l\3\t\h\9\c\w\e\1\m\g\o\7\s\d\g\y\o\u\y\j\a\a\6\r\t\s\1\6\j\o\y\8\m\f\i\q\r\y\e\m\d\a\k\d\s\s\w\9\o\8\z\h\4\m\3\i\y\x\x\p\x\k\3\c\1\j\y\q\h\l\c\5\r\q\j\o\s\5\x\d\t\s\j\o\l\b\f\d\0\5\t\6\p\w\v\q\x\b\1\b\s\h\p\m\j\f\w\i\2\2\q\q\z\w\z\t\d\i\w\v\l\r\o\1\u\b\g\h\o\b\z\2\6\2\2\7\a\n\l\f\6\l\e\t\l\m\k\e\j\f\l\0\n\z\x\o\e\4\g\3\q\8\c\a\4\g\f\h\9\6 ]] 00:07:01.685 20:48:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:01.685 20:48:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:01.685 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:01.685 [2024-08-11 20:48:12.369459] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:01.685 [2024-08-11 20:48:12.369544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71030 ] 00:07:01.944 [2024-08-11 20:48:12.498650] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.944 [2024-08-11 20:48:12.559457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.944 [2024-08-11 20:48:12.611915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.203  Copying: 512/512 [B] (average 166 kBps) 00:07:02.203 00:07:02.203 20:48:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t4b30rs4g0mhtrtfb2i9vgcty2r3vn0fwqhoo8dx8k55y8c5eg5ogcqolqkb4rj9mel79bir0gsbxo0p1i0n8iic3zpe90pzjhrjapcicct2z7shk9t3bo67wwp8s0neatnepsiuilklsi4sti6y9r5scqp5wngxshe0920nqflfur5jxceuqp7dl0bpss9h9l84dlenrqq2v8atkdcnenh7g1ly95haozz49zfwovxeu1n6w8aizoqyh2wpi9llm6ucsmxre5bnnypaulxrzu553plqepriskvoosu6bg5vqubiflbm26wvj1ryp18p86etmbjj7kz4t6ess86l5l3th9cwe1mgo7sdgyouyjaa6rts16joy8mfiqryemdakdssw9o8zh4m3iyxxpxk3c1jyqhlc5rqjos5xdtsjolbfd05t6pwvqxb1bshpmjfwi22qqzwztdiwvlro1ubghobz26227anlf6letlmkejfl0nzxoe4g3q8ca4gfh96 == \t\4\b\3\0\r\s\4\g\0\m\h\t\r\t\f\b\2\i\9\v\g\c\t\y\2\r\3\v\n\0\f\w\q\h\o\o\8\d\x\8\k\5\5\y\8\c\5\e\g\5\o\g\c\q\o\l\q\k\b\4\r\j\9\m\e\l\7\9\b\i\r\0\g\s\b\x\o\0\p\1\i\0\n\8\i\i\c\3\z\p\e\9\0\p\z\j\h\r\j\a\p\c\i\c\c\t\2\z\7\s\h\k\9\t\3\b\o\6\7\w\w\p\8\s\0\n\e\a\t\n\e\p\s\i\u\i\l\k\l\s\i\4\s\t\i\6\y\9\r\5\s\c\q\p\5\w\n\g\x\s\h\e\0\9\2\0\n\q\f\l\f\u\r\5\j\x\c\e\u\q\p\7\d\l\0\b\p\s\s\9\h\9\l\8\4\d\l\e\n\r\q\q\2\v\8\a\t\k\d\c\n\e\n\h\7\g\1\l\y\9\5\h\a\o\z\z\4\9\z\f\w\o\v\x\e\u\1\n\6\w\8\a\i\z\o\q\y\h\2\w\p\i\9\l\l\m\6\u\c\s\m\x\r\e\5\b\n\n\y\p\a\u\l\x\r\z\u\5\5\3\p\l\q\e\p\r\i\s\k\v\o\o\s\u\6\b\g\5\v\q\u\b\i\f\l\b\m\2\6\w\v\j\1\r\y\p\1\8\p\8\6\e\t\m\b\j\j\7\k\z\4\t\6\e\s\s\8\6\l\5\l\3\t\h\9\c\w\e\1\m\g\o\7\s\d\g\y\o\u\y\j\a\a\6\r\t\s\1\6\j\o\y\8\m\f\i\q\r\y\e\m\d\a\k\d\s\s\w\9\o\8\z\h\4\m\3\i\y\x\x\p\x\k\3\c\1\j\y\q\h\l\c\5\r\q\j\o\s\5\x\d\t\s\j\o\l\b\f\d\0\5\t\6\p\w\v\q\x\b\1\b\s\h\p\m\j\f\w\i\2\2\q\q\z\w\z\t\d\i\w\v\l\r\o\1\u\b\g\h\o\b\z\2\6\2\2\7\a\n\l\f\6\l\e\t\l\m\k\e\j\f\l\0\n\z\x\o\e\4\g\3\q\8\c\a\4\g\f\h\9\6 ]] 00:07:02.203 20:48:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:02.203 20:48:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:02.203 20:48:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:02.203 20:48:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:02.203 20:48:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.203 20:48:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:02.203 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:02.203 [2024-08-11 20:48:12.887663] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:02.203 [2024-08-11 20:48:12.887884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71034 ] 00:07:02.463 [2024-08-11 20:48:13.024776] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.463 [2024-08-11 20:48:13.098940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.463 [2024-08-11 20:48:13.157425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.721  Copying: 512/512 [B] (average 500 kBps) 00:07:02.721 00:07:02.721 20:48:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tbj9t3qirkvtoij0ki9juo6d0iu9zj71lnhwzuddfkggobg8csuhyx66qlaqyfhtfw93nega0axx75asg6ueuhma6kih7bnl8x671h475kwthak5e0y6bqmr7xi4rizj9a1ah73f7wz1dn64xh4d8g6x549n52zzob0avg1hbfd4d9lwhjdh9bot8es6g060dwt5f4f9dvxr3rogfgj3w5vb7g57v7t7y45d0cxptd4g2ffdwy4puya8bhk1t5dwv6q924vz4dfpqt14s7fpu9ey4pt09k9o3ex1qmi22xamv3c7go7nu7vfaxvndr68d3gw5htneirevy8w8tldlc10w0wfbu5qtghdg06n0nn0h2ovok4f4wush8rsz0iw6zm32j37rznjv3etbfri6jd1ld78km6xo54tifoh65l4ddh4xxspccrkpdun2lcrjwrpc7cowgcd5e2nyj24pkvfnhceatqiq4226060nu3nknkh9canixmynezi4dkw == \t\b\j\9\t\3\q\i\r\k\v\t\o\i\j\0\k\i\9\j\u\o\6\d\0\i\u\9\z\j\7\1\l\n\h\w\z\u\d\d\f\k\g\g\o\b\g\8\c\s\u\h\y\x\6\6\q\l\a\q\y\f\h\t\f\w\9\3\n\e\g\a\0\a\x\x\7\5\a\s\g\6\u\e\u\h\m\a\6\k\i\h\7\b\n\l\8\x\6\7\1\h\4\7\5\k\w\t\h\a\k\5\e\0\y\6\b\q\m\r\7\x\i\4\r\i\z\j\9\a\1\a\h\7\3\f\7\w\z\1\d\n\6\4\x\h\4\d\8\g\6\x\5\4\9\n\5\2\z\z\o\b\0\a\v\g\1\h\b\f\d\4\d\9\l\w\h\j\d\h\9\b\o\t\8\e\s\6\g\0\6\0\d\w\t\5\f\4\f\9\d\v\x\r\3\r\o\g\f\g\j\3\w\5\v\b\7\g\5\7\v\7\t\7\y\4\5\d\0\c\x\p\t\d\4\g\2\f\f\d\w\y\4\p\u\y\a\8\b\h\k\1\t\5\d\w\v\6\q\9\2\4\v\z\4\d\f\p\q\t\1\4\s\7\f\p\u\9\e\y\4\p\t\0\9\k\9\o\3\e\x\1\q\m\i\2\2\x\a\m\v\3\c\7\g\o\7\n\u\7\v\f\a\x\v\n\d\r\6\8\d\3\g\w\5\h\t\n\e\i\r\e\v\y\8\w\8\t\l\d\l\c\1\0\w\0\w\f\b\u\5\q\t\g\h\d\g\0\6\n\0\n\n\0\h\2\o\v\o\k\4\f\4\w\u\s\h\8\r\s\z\0\i\w\6\z\m\3\2\j\3\7\r\z\n\j\v\3\e\t\b\f\r\i\6\j\d\1\l\d\7\8\k\m\6\x\o\5\4\t\i\f\o\h\6\5\l\4\d\d\h\4\x\x\s\p\c\c\r\k\p\d\u\n\2\l\c\r\j\w\r\p\c\7\c\o\w\g\c\d\5\e\2\n\y\j\2\4\p\k\v\f\n\h\c\e\a\t\q\i\q\4\2\2\6\0\6\0\n\u\3\n\k\n\k\h\9\c\a\n\i\x\m\y\n\e\z\i\4\d\k\w ]] 00:07:02.721 20:48:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.721 20:48:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:02.721 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:02.721 [2024-08-11 20:48:13.434652] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:02.721 [2024-08-11 20:48:13.434748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71049 ] 00:07:02.980 [2024-08-11 20:48:13.570475] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.980 [2024-08-11 20:48:13.634216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.980 [2024-08-11 20:48:13.688072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.239  Copying: 512/512 [B] (average 500 kBps) 00:07:03.239 00:07:03.239 20:48:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tbj9t3qirkvtoij0ki9juo6d0iu9zj71lnhwzuddfkggobg8csuhyx66qlaqyfhtfw93nega0axx75asg6ueuhma6kih7bnl8x671h475kwthak5e0y6bqmr7xi4rizj9a1ah73f7wz1dn64xh4d8g6x549n52zzob0avg1hbfd4d9lwhjdh9bot8es6g060dwt5f4f9dvxr3rogfgj3w5vb7g57v7t7y45d0cxptd4g2ffdwy4puya8bhk1t5dwv6q924vz4dfpqt14s7fpu9ey4pt09k9o3ex1qmi22xamv3c7go7nu7vfaxvndr68d3gw5htneirevy8w8tldlc10w0wfbu5qtghdg06n0nn0h2ovok4f4wush8rsz0iw6zm32j37rznjv3etbfri6jd1ld78km6xo54tifoh65l4ddh4xxspccrkpdun2lcrjwrpc7cowgcd5e2nyj24pkvfnhceatqiq4226060nu3nknkh9canixmynezi4dkw == \t\b\j\9\t\3\q\i\r\k\v\t\o\i\j\0\k\i\9\j\u\o\6\d\0\i\u\9\z\j\7\1\l\n\h\w\z\u\d\d\f\k\g\g\o\b\g\8\c\s\u\h\y\x\6\6\q\l\a\q\y\f\h\t\f\w\9\3\n\e\g\a\0\a\x\x\7\5\a\s\g\6\u\e\u\h\m\a\6\k\i\h\7\b\n\l\8\x\6\7\1\h\4\7\5\k\w\t\h\a\k\5\e\0\y\6\b\q\m\r\7\x\i\4\r\i\z\j\9\a\1\a\h\7\3\f\7\w\z\1\d\n\6\4\x\h\4\d\8\g\6\x\5\4\9\n\5\2\z\z\o\b\0\a\v\g\1\h\b\f\d\4\d\9\l\w\h\j\d\h\9\b\o\t\8\e\s\6\g\0\6\0\d\w\t\5\f\4\f\9\d\v\x\r\3\r\o\g\f\g\j\3\w\5\v\b\7\g\5\7\v\7\t\7\y\4\5\d\0\c\x\p\t\d\4\g\2\f\f\d\w\y\4\p\u\y\a\8\b\h\k\1\t\5\d\w\v\6\q\9\2\4\v\z\4\d\f\p\q\t\1\4\s\7\f\p\u\9\e\y\4\p\t\0\9\k\9\o\3\e\x\1\q\m\i\2\2\x\a\m\v\3\c\7\g\o\7\n\u\7\v\f\a\x\v\n\d\r\6\8\d\3\g\w\5\h\t\n\e\i\r\e\v\y\8\w\8\t\l\d\l\c\1\0\w\0\w\f\b\u\5\q\t\g\h\d\g\0\6\n\0\n\n\0\h\2\o\v\o\k\4\f\4\w\u\s\h\8\r\s\z\0\i\w\6\z\m\3\2\j\3\7\r\z\n\j\v\3\e\t\b\f\r\i\6\j\d\1\l\d\7\8\k\m\6\x\o\5\4\t\i\f\o\h\6\5\l\4\d\d\h\4\x\x\s\p\c\c\r\k\p\d\u\n\2\l\c\r\j\w\r\p\c\7\c\o\w\g\c\d\5\e\2\n\y\j\2\4\p\k\v\f\n\h\c\e\a\t\q\i\q\4\2\2\6\0\6\0\n\u\3\n\k\n\k\h\9\c\a\n\i\x\m\y\n\e\z\i\4\d\k\w ]] 00:07:03.239 20:48:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.239 20:48:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:03.239 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:03.239 [2024-08-11 20:48:13.950584] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:03.239 [2024-08-11 20:48:13.950967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71053 ] 00:07:03.508 [2024-08-11 20:48:14.084034] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.508 [2024-08-11 20:48:14.145766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.508 [2024-08-11 20:48:14.201755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.787  Copying: 512/512 [B] (average 166 kBps) 00:07:03.787 00:07:03.787 20:48:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tbj9t3qirkvtoij0ki9juo6d0iu9zj71lnhwzuddfkggobg8csuhyx66qlaqyfhtfw93nega0axx75asg6ueuhma6kih7bnl8x671h475kwthak5e0y6bqmr7xi4rizj9a1ah73f7wz1dn64xh4d8g6x549n52zzob0avg1hbfd4d9lwhjdh9bot8es6g060dwt5f4f9dvxr3rogfgj3w5vb7g57v7t7y45d0cxptd4g2ffdwy4puya8bhk1t5dwv6q924vz4dfpqt14s7fpu9ey4pt09k9o3ex1qmi22xamv3c7go7nu7vfaxvndr68d3gw5htneirevy8w8tldlc10w0wfbu5qtghdg06n0nn0h2ovok4f4wush8rsz0iw6zm32j37rznjv3etbfri6jd1ld78km6xo54tifoh65l4ddh4xxspccrkpdun2lcrjwrpc7cowgcd5e2nyj24pkvfnhceatqiq4226060nu3nknkh9canixmynezi4dkw == \t\b\j\9\t\3\q\i\r\k\v\t\o\i\j\0\k\i\9\j\u\o\6\d\0\i\u\9\z\j\7\1\l\n\h\w\z\u\d\d\f\k\g\g\o\b\g\8\c\s\u\h\y\x\6\6\q\l\a\q\y\f\h\t\f\w\9\3\n\e\g\a\0\a\x\x\7\5\a\s\g\6\u\e\u\h\m\a\6\k\i\h\7\b\n\l\8\x\6\7\1\h\4\7\5\k\w\t\h\a\k\5\e\0\y\6\b\q\m\r\7\x\i\4\r\i\z\j\9\a\1\a\h\7\3\f\7\w\z\1\d\n\6\4\x\h\4\d\8\g\6\x\5\4\9\n\5\2\z\z\o\b\0\a\v\g\1\h\b\f\d\4\d\9\l\w\h\j\d\h\9\b\o\t\8\e\s\6\g\0\6\0\d\w\t\5\f\4\f\9\d\v\x\r\3\r\o\g\f\g\j\3\w\5\v\b\7\g\5\7\v\7\t\7\y\4\5\d\0\c\x\p\t\d\4\g\2\f\f\d\w\y\4\p\u\y\a\8\b\h\k\1\t\5\d\w\v\6\q\9\2\4\v\z\4\d\f\p\q\t\1\4\s\7\f\p\u\9\e\y\4\p\t\0\9\k\9\o\3\e\x\1\q\m\i\2\2\x\a\m\v\3\c\7\g\o\7\n\u\7\v\f\a\x\v\n\d\r\6\8\d\3\g\w\5\h\t\n\e\i\r\e\v\y\8\w\8\t\l\d\l\c\1\0\w\0\w\f\b\u\5\q\t\g\h\d\g\0\6\n\0\n\n\0\h\2\o\v\o\k\4\f\4\w\u\s\h\8\r\s\z\0\i\w\6\z\m\3\2\j\3\7\r\z\n\j\v\3\e\t\b\f\r\i\6\j\d\1\l\d\7\8\k\m\6\x\o\5\4\t\i\f\o\h\6\5\l\4\d\d\h\4\x\x\s\p\c\c\r\k\p\d\u\n\2\l\c\r\j\w\r\p\c\7\c\o\w\g\c\d\5\e\2\n\y\j\2\4\p\k\v\f\n\h\c\e\a\t\q\i\q\4\2\2\6\0\6\0\n\u\3\n\k\n\k\h\9\c\a\n\i\x\m\y\n\e\z\i\4\d\k\w ]] 00:07:03.787 20:48:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.787 20:48:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:03.787 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:03.787 [2024-08-11 20:48:14.476336] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:03.787 [2024-08-11 20:48:14.476438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71068 ] 00:07:04.046 [2024-08-11 20:48:14.614498] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.046 [2024-08-11 20:48:14.682409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.046 [2024-08-11 20:48:14.737114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.305  Copying: 512/512 [B] (average 250 kBps) 00:07:04.305 00:07:04.305 ************************************ 00:07:04.305 END TEST dd_flags_misc 00:07:04.305 ************************************ 00:07:04.305 20:48:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tbj9t3qirkvtoij0ki9juo6d0iu9zj71lnhwzuddfkggobg8csuhyx66qlaqyfhtfw93nega0axx75asg6ueuhma6kih7bnl8x671h475kwthak5e0y6bqmr7xi4rizj9a1ah73f7wz1dn64xh4d8g6x549n52zzob0avg1hbfd4d9lwhjdh9bot8es6g060dwt5f4f9dvxr3rogfgj3w5vb7g57v7t7y45d0cxptd4g2ffdwy4puya8bhk1t5dwv6q924vz4dfpqt14s7fpu9ey4pt09k9o3ex1qmi22xamv3c7go7nu7vfaxvndr68d3gw5htneirevy8w8tldlc10w0wfbu5qtghdg06n0nn0h2ovok4f4wush8rsz0iw6zm32j37rznjv3etbfri6jd1ld78km6xo54tifoh65l4ddh4xxspccrkpdun2lcrjwrpc7cowgcd5e2nyj24pkvfnhceatqiq4226060nu3nknkh9canixmynezi4dkw == \t\b\j\9\t\3\q\i\r\k\v\t\o\i\j\0\k\i\9\j\u\o\6\d\0\i\u\9\z\j\7\1\l\n\h\w\z\u\d\d\f\k\g\g\o\b\g\8\c\s\u\h\y\x\6\6\q\l\a\q\y\f\h\t\f\w\9\3\n\e\g\a\0\a\x\x\7\5\a\s\g\6\u\e\u\h\m\a\6\k\i\h\7\b\n\l\8\x\6\7\1\h\4\7\5\k\w\t\h\a\k\5\e\0\y\6\b\q\m\r\7\x\i\4\r\i\z\j\9\a\1\a\h\7\3\f\7\w\z\1\d\n\6\4\x\h\4\d\8\g\6\x\5\4\9\n\5\2\z\z\o\b\0\a\v\g\1\h\b\f\d\4\d\9\l\w\h\j\d\h\9\b\o\t\8\e\s\6\g\0\6\0\d\w\t\5\f\4\f\9\d\v\x\r\3\r\o\g\f\g\j\3\w\5\v\b\7\g\5\7\v\7\t\7\y\4\5\d\0\c\x\p\t\d\4\g\2\f\f\d\w\y\4\p\u\y\a\8\b\h\k\1\t\5\d\w\v\6\q\9\2\4\v\z\4\d\f\p\q\t\1\4\s\7\f\p\u\9\e\y\4\p\t\0\9\k\9\o\3\e\x\1\q\m\i\2\2\x\a\m\v\3\c\7\g\o\7\n\u\7\v\f\a\x\v\n\d\r\6\8\d\3\g\w\5\h\t\n\e\i\r\e\v\y\8\w\8\t\l\d\l\c\1\0\w\0\w\f\b\u\5\q\t\g\h\d\g\0\6\n\0\n\n\0\h\2\o\v\o\k\4\f\4\w\u\s\h\8\r\s\z\0\i\w\6\z\m\3\2\j\3\7\r\z\n\j\v\3\e\t\b\f\r\i\6\j\d\1\l\d\7\8\k\m\6\x\o\5\4\t\i\f\o\h\6\5\l\4\d\d\h\4\x\x\s\p\c\c\r\k\p\d\u\n\2\l\c\r\j\w\r\p\c\7\c\o\w\g\c\d\5\e\2\n\y\j\2\4\p\k\v\f\n\h\c\e\a\t\q\i\q\4\2\2\6\0\6\0\n\u\3\n\k\n\k\h\9\c\a\n\i\x\m\y\n\e\z\i\4\d\k\w ]] 00:07:04.305 00:07:04.305 real 0m4.302s 00:07:04.305 user 0m2.232s 00:07:04.305 sys 0m2.229s 00:07:04.305 20:48:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.305 20:48:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:04.305 20:48:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:04.305 20:48:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:04.305 * Second test run, disabling liburing, forcing AIO 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:07:04.306 ************************************ 00:07:04.306 START TEST dd_flag_append_forced_aio 00:07:04.306 ************************************ 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1121 -- # append 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=ztvirb48pr6bti0vnzxvkrhv1b5ms9zv 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=vq7sixpha38gp6txiv3z3w1pw1jhm0d8 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s ztvirb48pr6bti0vnzxvkrhv1b5ms9zv 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s vq7sixpha38gp6txiv3z3w1pw1jhm0d8 00:07:04.306 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:04.306 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:04.306 [2024-08-11 20:48:15.070083] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:04.306 [2024-08-11 20:48:15.070333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71092 ] 00:07:04.564 [2024-08-11 20:48:15.201900] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.564 [2024-08-11 20:48:15.282447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.823 [2024-08-11 20:48:15.342863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.082  Copying: 32/32 [B] (average 31 kBps) 00:07:05.082 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ vq7sixpha38gp6txiv3z3w1pw1jhm0d8ztvirb48pr6bti0vnzxvkrhv1b5ms9zv == \v\q\7\s\i\x\p\h\a\3\8\g\p\6\t\x\i\v\3\z\3\w\1\p\w\1\j\h\m\0\d\8\z\t\v\i\r\b\4\8\p\r\6\b\t\i\0\v\n\z\x\v\k\r\h\v\1\b\5\m\s\9\z\v ]] 00:07:05.082 00:07:05.082 real 0m0.610s 00:07:05.082 user 0m0.327s 00:07:05.082 sys 0m0.161s 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.082 ************************************ 00:07:05.082 END TEST dd_flag_append_forced_aio 00:07:05.082 ************************************ 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:05.082 ************************************ 00:07:05.082 START TEST dd_flag_directory_forced_aio 00:07:05.082 ************************************ 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1121 -- # directory 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # local es=0 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.082 20:48:15 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.082 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:05.082 [2024-08-11 20:48:15.737258] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:05.082 [2024-08-11 20:48:15.737365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71124 ] 00:07:05.341 [2024-08-11 20:48:15.876601] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.341 [2024-08-11 20:48:15.948607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.341 [2024-08-11 20:48:16.010534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.341 [2024-08-11 20:48:16.043993] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:05.341 [2024-08-11 20:48:16.044049] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:05.341 [2024-08-11 20:48:16.044064] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.601 [2024-08-11 20:48:16.162125] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # es=236 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@658 -- # es=108 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # case "$es" in 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@666 -- # es=1 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # local es=0 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.601 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:05.601 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:05.601 [2024-08-11 20:48:16.323029] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:05.601 [2024-08-11 20:48:16.323128] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71134 ] 00:07:05.860 [2024-08-11 20:48:16.462830] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.860 [2024-08-11 20:48:16.547187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.860 [2024-08-11 20:48:16.611015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.118 [2024-08-11 20:48:16.646874] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.119 [2024-08-11 20:48:16.646945] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:06.119 [2024-08-11 20:48:16.646960] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.119 [2024-08-11 20:48:16.779814] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:06.119 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@649 -- # es=236 00:07:06.119 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:06.119 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@658 -- # es=108 00:07:06.119 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # case "$es" in 00:07:06.119 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@666 -- # es=1 00:07:06.119 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:06.119 00:07:06.119 real 0m1.196s 00:07:06.119 user 0m0.640s 00:07:06.119 sys 0m0.346s 00:07:06.119 ************************************ 00:07:06.119 END TEST dd_flag_directory_forced_aio 00:07:06.119 ************************************ 00:07:06.119 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.119 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:06.378 ************************************ 00:07:06.378 START TEST dd_flag_nofollow_forced_aio 00:07:06.378 ************************************ 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1121 -- # nofollow 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # local es=0 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.378 20:48:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.378 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:06.378 [2024-08-11 20:48:17.000666] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:06.378 [2024-08-11 20:48:17.000784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71162 ] 00:07:06.378 [2024-08-11 20:48:17.140593] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.637 [2024-08-11 20:48:17.227704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.637 [2024-08-11 20:48:17.285485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.637 [2024-08-11 20:48:17.318270] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:06.637 [2024-08-11 20:48:17.318332] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:06.637 [2024-08-11 20:48:17.318348] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.896 [2024-08-11 20:48:17.429222] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # es=216 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@658 -- # es=88 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # case "$es" in 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@666 -- # es=1 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # local es=0 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 
00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:06.896 20:48:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:06.896 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:06.896 [2024-08-11 20:48:17.571501] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:06.896 [2024-08-11 20:48:17.571623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71177 ] 00:07:07.154 [2024-08-11 20:48:17.710571] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.154 [2024-08-11 20:48:17.779134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.154 [2024-08-11 20:48:17.836476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.154 [2024-08-11 20:48:17.867576] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:07.154 [2024-08-11 20:48:17.867646] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:07.154 [2024-08-11 20:48:17.867663] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.413 [2024-08-11 20:48:17.980202] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:07.413 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@649 -- # es=216 00:07:07.413 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:07.413 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@658 -- # es=88 00:07:07.413 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # case "$es" in 00:07:07.413 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@666 -- # es=1 00:07:07.413 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:07.413 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:07.413 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:07.413 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:07.413 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.413 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:07.413 [2024-08-11 20:48:18.141054] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:07.413 [2024-08-11 20:48:18.141213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71180 ] 00:07:07.672 [2024-08-11 20:48:18.273564] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.672 [2024-08-11 20:48:18.361130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.672 [2024-08-11 20:48:18.416058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.930  Copying: 512/512 [B] (average 500 kBps) 00:07:07.930 00:07:07.930 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 1qxm3bgrdxjtaxmhzfh0a9cz3r37rxctso77mvuum5ehlbrtgs2xgtmbf2ciu1h3ihz07xsr5g8p8yvwpg9vp3kidl6kh4eb4tsmi059vxrlsyhl5nga9b5k4t6i54ro11msmxrn6muq5np272bguogk8es8879leclp557lgrvft3v8x9c870ymgu8kx78v5tlzdqcctyb5l4epfclorkcb9pfizuty8cy1cupyzrg8ty0tnbegiojgfk84dtq7nvkdyy07qnww3f60dczyhfbahynnbtg0wcwyzmacjyws9tp4ne9szk7h0aerpcm7li78roylwcbektkal0ht7tcvnulkwyvq7i6v3mz5nz7m8pzvyojpqz67z30mu4cccw8j89oq8azniwizvbgp972k4s9xuckwwtrjbrt617hn6wfyg4hl8g67ok866ixwbb2uedgftoke421jchbzzozg6cwu045vql3wz55qamkkb5lhbp3am5f8ehpsaxj0 == \1\q\x\m\3\b\g\r\d\x\j\t\a\x\m\h\z\f\h\0\a\9\c\z\3\r\3\7\r\x\c\t\s\o\7\7\m\v\u\u\m\5\e\h\l\b\r\t\g\s\2\x\g\t\m\b\f\2\c\i\u\1\h\3\i\h\z\0\7\x\s\r\5\g\8\p\8\y\v\w\p\g\9\v\p\3\k\i\d\l\6\k\h\4\e\b\4\t\s\m\i\0\5\9\v\x\r\l\s\y\h\l\5\n\g\a\9\b\5\k\4\t\6\i\5\4\r\o\1\1\m\s\m\x\r\n\6\m\u\q\5\n\p\2\7\2\b\g\u\o\g\k\8\e\s\8\8\7\9\l\e\c\l\p\5\5\7\l\g\r\v\f\t\3\v\8\x\9\c\8\7\0\y\m\g\u\8\k\x\7\8\v\5\t\l\z\d\q\c\c\t\y\b\5\l\4\e\p\f\c\l\o\r\k\c\b\9\p\f\i\z\u\t\y\8\c\y\1\c\u\p\y\z\r\g\8\t\y\0\t\n\b\e\g\i\o\j\g\f\k\8\4\d\t\q\7\n\v\k\d\y\y\0\7\q\n\w\w\3\f\6\0\d\c\z\y\h\f\b\a\h\y\n\n\b\t\g\0\w\c\w\y\z\m\a\c\j\y\w\s\9\t\p\4\n\e\9\s\z\k\7\h\0\a\e\r\p\c\m\7\l\i\7\8\r\o\y\l\w\c\b\e\k\t\k\a\l\0\h\t\7\t\c\v\n\u\l\k\w\y\v\q\7\i\6\v\3\m\z\5\n\z\7\m\8\p\z\v\y\o\j\p\q\z\6\7\z\3\0\m\u\4\c\c\c\w\8\j\8\9\o\q\8\a\z\n\i\w\i\z\v\b\g\p\9\7\2\k\4\s\9\x\u\c\k\w\w\t\r\j\b\r\t\6\1\7\h\n\6\w\f\y\g\4\h\l\8\g\6\7\o\k\8\6\6\i\x\w\b\b\2\u\e\d\g\f\t\o\k\e\4\2\1\j\c\h\b\z\z\o\z\g\6\c\w\u\0\4\5\v\q\l\3\w\z\5\5\q\a\m\k\k\b\5\l\h\b\p\3\a\m\5\f\8\e\h\p\s\a\x\j\0 ]] 00:07:07.930 00:07:07.930 real 0m1.742s 00:07:07.930 user 0m0.940s 00:07:07.930 sys 0m0.469s 00:07:07.930 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.930 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:07.930 ************************************ 00:07:07.930 END TEST dd_flag_nofollow_forced_aio 00:07:07.930 ************************************ 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:08.189 ************************************ 00:07:08.189 START TEST dd_flag_noatime_forced_aio 00:07:08.189 ************************************ 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1121 -- # noatime 00:07:08.189 20:48:18 
spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1723409298 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1723409298 00:07:08.189 20:48:18 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:09.125 20:48:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.125 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:09.125 [2024-08-11 20:48:19.805941] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:09.125 [2024-08-11 20:48:19.806061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71225 ] 00:07:09.383 [2024-08-11 20:48:19.945711] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.383 [2024-08-11 20:48:20.043498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.383 [2024-08-11 20:48:20.100642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.641  Copying: 512/512 [B] (average 500 kBps) 00:07:09.641 00:07:09.641 20:48:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.641 20:48:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1723409298 )) 00:07:09.641 20:48:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.641 20:48:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1723409298 )) 00:07:09.641 20:48:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.641 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:09.641 [2024-08-11 20:48:20.405507] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:09.641 [2024-08-11 20:48:20.405635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71237 ] 00:07:09.900 [2024-08-11 20:48:20.543831] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.900 [2024-08-11 20:48:20.603662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.900 [2024-08-11 20:48:20.657040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.159  Copying: 512/512 [B] (average 500 kBps) 00:07:10.159 00:07:10.159 20:48:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:10.159 20:48:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1723409300 )) 00:07:10.159 00:07:10.159 real 0m2.167s 00:07:10.159 user 0m0.595s 00:07:10.159 sys 0m0.329s 00:07:10.159 ************************************ 00:07:10.159 END TEST dd_flag_noatime_forced_aio 00:07:10.159 ************************************ 00:07:10.159 20:48:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.159 20:48:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:10.418 ************************************ 00:07:10.418 START TEST dd_flags_misc_forced_aio 00:07:10.418 ************************************ 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1121 -- # io 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.418 20:48:20 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:10.418 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:10.418 [2024-08-11 20:48:21.013680] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 
initialization... 00:07:10.418 [2024-08-11 20:48:21.013764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71263 ] 00:07:10.418 [2024-08-11 20:48:21.147281] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.677 [2024-08-11 20:48:21.231877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.677 [2024-08-11 20:48:21.293425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.936  Copying: 512/512 [B] (average 500 kBps) 00:07:10.936 00:07:10.936 20:48:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jce1yf6s9e54of32qjwxytle6i5aaz9qb83hxqh7yjk3alltumae5cx1du6aj6ugm7tc8fbihzmfvs5o138np08wyje99y8dziezsgd23kf5a0iboc54pt7vqiaf50jdejg0m7nhz4et2s664goybqtwio4w8kcuzsuck2ab13usj79an5yqlltq70odq7dvpe0ub6evoxkonuatwktxvspb8r1bm6r14rs8ozxej8mxvilq8qozv5pmqbx712cfotxl98hgd8bjlm9v3khbg3aiuyy8j3pykqt4g0gkmcghbdlacuwezn89hj3omn8x4srj8nm1b7qsycr9zftb9igh6cxzil2elxgb0z94b6zj6vtcyxx2z1impditx1pjvw0s0vkivvz05cldwhde6z74dcpwgwqj9m0znh5l4yxupawr5s738rnm6bq29h76246mbx1unaxgdiqcojhxq388lowlz7vm8bishm2b1q7e34gfnztkagf805m8v2z7 == \j\c\e\1\y\f\6\s\9\e\5\4\o\f\3\2\q\j\w\x\y\t\l\e\6\i\5\a\a\z\9\q\b\8\3\h\x\q\h\7\y\j\k\3\a\l\l\t\u\m\a\e\5\c\x\1\d\u\6\a\j\6\u\g\m\7\t\c\8\f\b\i\h\z\m\f\v\s\5\o\1\3\8\n\p\0\8\w\y\j\e\9\9\y\8\d\z\i\e\z\s\g\d\2\3\k\f\5\a\0\i\b\o\c\5\4\p\t\7\v\q\i\a\f\5\0\j\d\e\j\g\0\m\7\n\h\z\4\e\t\2\s\6\6\4\g\o\y\b\q\t\w\i\o\4\w\8\k\c\u\z\s\u\c\k\2\a\b\1\3\u\s\j\7\9\a\n\5\y\q\l\l\t\q\7\0\o\d\q\7\d\v\p\e\0\u\b\6\e\v\o\x\k\o\n\u\a\t\w\k\t\x\v\s\p\b\8\r\1\b\m\6\r\1\4\r\s\8\o\z\x\e\j\8\m\x\v\i\l\q\8\q\o\z\v\5\p\m\q\b\x\7\1\2\c\f\o\t\x\l\9\8\h\g\d\8\b\j\l\m\9\v\3\k\h\b\g\3\a\i\u\y\y\8\j\3\p\y\k\q\t\4\g\0\g\k\m\c\g\h\b\d\l\a\c\u\w\e\z\n\8\9\h\j\3\o\m\n\8\x\4\s\r\j\8\n\m\1\b\7\q\s\y\c\r\9\z\f\t\b\9\i\g\h\6\c\x\z\i\l\2\e\l\x\g\b\0\z\9\4\b\6\z\j\6\v\t\c\y\x\x\2\z\1\i\m\p\d\i\t\x\1\p\j\v\w\0\s\0\v\k\i\v\v\z\0\5\c\l\d\w\h\d\e\6\z\7\4\d\c\p\w\g\w\q\j\9\m\0\z\n\h\5\l\4\y\x\u\p\a\w\r\5\s\7\3\8\r\n\m\6\b\q\2\9\h\7\6\2\4\6\m\b\x\1\u\n\a\x\g\d\i\q\c\o\j\h\x\q\3\8\8\l\o\w\l\z\7\v\m\8\b\i\s\h\m\2\b\1\q\7\e\3\4\g\f\n\z\t\k\a\g\f\8\0\5\m\8\v\2\z\7 ]] 00:07:10.936 20:48:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.936 20:48:21 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:10.936 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:10.936 [2024-08-11 20:48:21.600602] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:10.936 [2024-08-11 20:48:21.600707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71275 ] 00:07:11.196 [2024-08-11 20:48:21.735788] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.196 [2024-08-11 20:48:21.815120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.196 [2024-08-11 20:48:21.871969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.456  Copying: 512/512 [B] (average 500 kBps) 00:07:11.456 00:07:11.457 20:48:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jce1yf6s9e54of32qjwxytle6i5aaz9qb83hxqh7yjk3alltumae5cx1du6aj6ugm7tc8fbihzmfvs5o138np08wyje99y8dziezsgd23kf5a0iboc54pt7vqiaf50jdejg0m7nhz4et2s664goybqtwio4w8kcuzsuck2ab13usj79an5yqlltq70odq7dvpe0ub6evoxkonuatwktxvspb8r1bm6r14rs8ozxej8mxvilq8qozv5pmqbx712cfotxl98hgd8bjlm9v3khbg3aiuyy8j3pykqt4g0gkmcghbdlacuwezn89hj3omn8x4srj8nm1b7qsycr9zftb9igh6cxzil2elxgb0z94b6zj6vtcyxx2z1impditx1pjvw0s0vkivvz05cldwhde6z74dcpwgwqj9m0znh5l4yxupawr5s738rnm6bq29h76246mbx1unaxgdiqcojhxq388lowlz7vm8bishm2b1q7e34gfnztkagf805m8v2z7 == \j\c\e\1\y\f\6\s\9\e\5\4\o\f\3\2\q\j\w\x\y\t\l\e\6\i\5\a\a\z\9\q\b\8\3\h\x\q\h\7\y\j\k\3\a\l\l\t\u\m\a\e\5\c\x\1\d\u\6\a\j\6\u\g\m\7\t\c\8\f\b\i\h\z\m\f\v\s\5\o\1\3\8\n\p\0\8\w\y\j\e\9\9\y\8\d\z\i\e\z\s\g\d\2\3\k\f\5\a\0\i\b\o\c\5\4\p\t\7\v\q\i\a\f\5\0\j\d\e\j\g\0\m\7\n\h\z\4\e\t\2\s\6\6\4\g\o\y\b\q\t\w\i\o\4\w\8\k\c\u\z\s\u\c\k\2\a\b\1\3\u\s\j\7\9\a\n\5\y\q\l\l\t\q\7\0\o\d\q\7\d\v\p\e\0\u\b\6\e\v\o\x\k\o\n\u\a\t\w\k\t\x\v\s\p\b\8\r\1\b\m\6\r\1\4\r\s\8\o\z\x\e\j\8\m\x\v\i\l\q\8\q\o\z\v\5\p\m\q\b\x\7\1\2\c\f\o\t\x\l\9\8\h\g\d\8\b\j\l\m\9\v\3\k\h\b\g\3\a\i\u\y\y\8\j\3\p\y\k\q\t\4\g\0\g\k\m\c\g\h\b\d\l\a\c\u\w\e\z\n\8\9\h\j\3\o\m\n\8\x\4\s\r\j\8\n\m\1\b\7\q\s\y\c\r\9\z\f\t\b\9\i\g\h\6\c\x\z\i\l\2\e\l\x\g\b\0\z\9\4\b\6\z\j\6\v\t\c\y\x\x\2\z\1\i\m\p\d\i\t\x\1\p\j\v\w\0\s\0\v\k\i\v\v\z\0\5\c\l\d\w\h\d\e\6\z\7\4\d\c\p\w\g\w\q\j\9\m\0\z\n\h\5\l\4\y\x\u\p\a\w\r\5\s\7\3\8\r\n\m\6\b\q\2\9\h\7\6\2\4\6\m\b\x\1\u\n\a\x\g\d\i\q\c\o\j\h\x\q\3\8\8\l\o\w\l\z\7\v\m\8\b\i\s\h\m\2\b\1\q\7\e\3\4\g\f\n\z\t\k\a\g\f\8\0\5\m\8\v\2\z\7 ]] 00:07:11.457 20:48:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.457 20:48:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:11.457 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:11.457 [2024-08-11 20:48:22.157405] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:11.457 [2024-08-11 20:48:22.157503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71278 ] 00:07:11.768 [2024-08-11 20:48:22.292538] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.768 [2024-08-11 20:48:22.372078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.768 [2024-08-11 20:48:22.427560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.029  Copying: 512/512 [B] (average 250 kBps) 00:07:12.029 00:07:12.029 20:48:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jce1yf6s9e54of32qjwxytle6i5aaz9qb83hxqh7yjk3alltumae5cx1du6aj6ugm7tc8fbihzmfvs5o138np08wyje99y8dziezsgd23kf5a0iboc54pt7vqiaf50jdejg0m7nhz4et2s664goybqtwio4w8kcuzsuck2ab13usj79an5yqlltq70odq7dvpe0ub6evoxkonuatwktxvspb8r1bm6r14rs8ozxej8mxvilq8qozv5pmqbx712cfotxl98hgd8bjlm9v3khbg3aiuyy8j3pykqt4g0gkmcghbdlacuwezn89hj3omn8x4srj8nm1b7qsycr9zftb9igh6cxzil2elxgb0z94b6zj6vtcyxx2z1impditx1pjvw0s0vkivvz05cldwhde6z74dcpwgwqj9m0znh5l4yxupawr5s738rnm6bq29h76246mbx1unaxgdiqcojhxq388lowlz7vm8bishm2b1q7e34gfnztkagf805m8v2z7 == \j\c\e\1\y\f\6\s\9\e\5\4\o\f\3\2\q\j\w\x\y\t\l\e\6\i\5\a\a\z\9\q\b\8\3\h\x\q\h\7\y\j\k\3\a\l\l\t\u\m\a\e\5\c\x\1\d\u\6\a\j\6\u\g\m\7\t\c\8\f\b\i\h\z\m\f\v\s\5\o\1\3\8\n\p\0\8\w\y\j\e\9\9\y\8\d\z\i\e\z\s\g\d\2\3\k\f\5\a\0\i\b\o\c\5\4\p\t\7\v\q\i\a\f\5\0\j\d\e\j\g\0\m\7\n\h\z\4\e\t\2\s\6\6\4\g\o\y\b\q\t\w\i\o\4\w\8\k\c\u\z\s\u\c\k\2\a\b\1\3\u\s\j\7\9\a\n\5\y\q\l\l\t\q\7\0\o\d\q\7\d\v\p\e\0\u\b\6\e\v\o\x\k\o\n\u\a\t\w\k\t\x\v\s\p\b\8\r\1\b\m\6\r\1\4\r\s\8\o\z\x\e\j\8\m\x\v\i\l\q\8\q\o\z\v\5\p\m\q\b\x\7\1\2\c\f\o\t\x\l\9\8\h\g\d\8\b\j\l\m\9\v\3\k\h\b\g\3\a\i\u\y\y\8\j\3\p\y\k\q\t\4\g\0\g\k\m\c\g\h\b\d\l\a\c\u\w\e\z\n\8\9\h\j\3\o\m\n\8\x\4\s\r\j\8\n\m\1\b\7\q\s\y\c\r\9\z\f\t\b\9\i\g\h\6\c\x\z\i\l\2\e\l\x\g\b\0\z\9\4\b\6\z\j\6\v\t\c\y\x\x\2\z\1\i\m\p\d\i\t\x\1\p\j\v\w\0\s\0\v\k\i\v\v\z\0\5\c\l\d\w\h\d\e\6\z\7\4\d\c\p\w\g\w\q\j\9\m\0\z\n\h\5\l\4\y\x\u\p\a\w\r\5\s\7\3\8\r\n\m\6\b\q\2\9\h\7\6\2\4\6\m\b\x\1\u\n\a\x\g\d\i\q\c\o\j\h\x\q\3\8\8\l\o\w\l\z\7\v\m\8\b\i\s\h\m\2\b\1\q\7\e\3\4\g\f\n\z\t\k\a\g\f\8\0\5\m\8\v\2\z\7 ]] 00:07:12.029 20:48:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.029 20:48:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:12.029 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:12.029 [2024-08-11 20:48:22.745345] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:12.029 [2024-08-11 20:48:22.745445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71291 ] 00:07:12.287 [2024-08-11 20:48:22.884405] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.287 [2024-08-11 20:48:22.957345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.287 [2024-08-11 20:48:23.012099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.545  Copying: 512/512 [B] (average 500 kBps) 00:07:12.545 00:07:12.545 20:48:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jce1yf6s9e54of32qjwxytle6i5aaz9qb83hxqh7yjk3alltumae5cx1du6aj6ugm7tc8fbihzmfvs5o138np08wyje99y8dziezsgd23kf5a0iboc54pt7vqiaf50jdejg0m7nhz4et2s664goybqtwio4w8kcuzsuck2ab13usj79an5yqlltq70odq7dvpe0ub6evoxkonuatwktxvspb8r1bm6r14rs8ozxej8mxvilq8qozv5pmqbx712cfotxl98hgd8bjlm9v3khbg3aiuyy8j3pykqt4g0gkmcghbdlacuwezn89hj3omn8x4srj8nm1b7qsycr9zftb9igh6cxzil2elxgb0z94b6zj6vtcyxx2z1impditx1pjvw0s0vkivvz05cldwhde6z74dcpwgwqj9m0znh5l4yxupawr5s738rnm6bq29h76246mbx1unaxgdiqcojhxq388lowlz7vm8bishm2b1q7e34gfnztkagf805m8v2z7 == \j\c\e\1\y\f\6\s\9\e\5\4\o\f\3\2\q\j\w\x\y\t\l\e\6\i\5\a\a\z\9\q\b\8\3\h\x\q\h\7\y\j\k\3\a\l\l\t\u\m\a\e\5\c\x\1\d\u\6\a\j\6\u\g\m\7\t\c\8\f\b\i\h\z\m\f\v\s\5\o\1\3\8\n\p\0\8\w\y\j\e\9\9\y\8\d\z\i\e\z\s\g\d\2\3\k\f\5\a\0\i\b\o\c\5\4\p\t\7\v\q\i\a\f\5\0\j\d\e\j\g\0\m\7\n\h\z\4\e\t\2\s\6\6\4\g\o\y\b\q\t\w\i\o\4\w\8\k\c\u\z\s\u\c\k\2\a\b\1\3\u\s\j\7\9\a\n\5\y\q\l\l\t\q\7\0\o\d\q\7\d\v\p\e\0\u\b\6\e\v\o\x\k\o\n\u\a\t\w\k\t\x\v\s\p\b\8\r\1\b\m\6\r\1\4\r\s\8\o\z\x\e\j\8\m\x\v\i\l\q\8\q\o\z\v\5\p\m\q\b\x\7\1\2\c\f\o\t\x\l\9\8\h\g\d\8\b\j\l\m\9\v\3\k\h\b\g\3\a\i\u\y\y\8\j\3\p\y\k\q\t\4\g\0\g\k\m\c\g\h\b\d\l\a\c\u\w\e\z\n\8\9\h\j\3\o\m\n\8\x\4\s\r\j\8\n\m\1\b\7\q\s\y\c\r\9\z\f\t\b\9\i\g\h\6\c\x\z\i\l\2\e\l\x\g\b\0\z\9\4\b\6\z\j\6\v\t\c\y\x\x\2\z\1\i\m\p\d\i\t\x\1\p\j\v\w\0\s\0\v\k\i\v\v\z\0\5\c\l\d\w\h\d\e\6\z\7\4\d\c\p\w\g\w\q\j\9\m\0\z\n\h\5\l\4\y\x\u\p\a\w\r\5\s\7\3\8\r\n\m\6\b\q\2\9\h\7\6\2\4\6\m\b\x\1\u\n\a\x\g\d\i\q\c\o\j\h\x\q\3\8\8\l\o\w\l\z\7\v\m\8\b\i\s\h\m\2\b\1\q\7\e\3\4\g\f\n\z\t\k\a\g\f\8\0\5\m\8\v\2\z\7 ]] 00:07:12.545 20:48:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:12.545 20:48:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:12.545 20:48:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:12.545 20:48:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:12.545 20:48:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.545 20:48:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:12.545 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:12.545 [2024-08-11 20:48:23.305268] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:12.545 [2024-08-11 20:48:23.305353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71299 ] 00:07:12.802 [2024-08-11 20:48:23.440778] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.802 [2024-08-11 20:48:23.522062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.802 [2024-08-11 20:48:23.577898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.061  Copying: 512/512 [B] (average 500 kBps) 00:07:13.061 00:07:13.061 20:48:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xxby96pgi8kx48e5itikwyxl1outkitvmz2j40ec6bvf7hb9bbmt9jfu9eo1a3wutr2bfm2ykvkvcu3nvqzgjx3mkbo3onhzohx4cd5l4n3tr2evmmzgrh12mgfjy1q8m2j7wsljm2avk3o0w9vvkm15es4a42zynlikpzlnlgu9lz8vmwq04rbb024f672sxgtjoozrlcb7iixz531g7fgjeoh9p18j4zptq1tf8l9mk4w7hpeddayo0ei39deon4loehwhs3avn7ggl5kwcyjxq4dz25z86j8yr2ja92u3xt316q5cr37z4q4lgxp0qhll34ar6lgy10m0sbz35t4n7hawjqjgz4jxgmt64typqmkgs9ruy3a9rdkwqimxecm0bt7l4d4jqtpajen6tc37w1n15v19g4n6xh4y8o0344t78kv141iso62iegzbkbooqqc25ci33fnp7in8nqdfg09bp0614dmjajxyd847ulrtv9wl0jtdsjay6lvn == \x\x\b\y\9\6\p\g\i\8\k\x\4\8\e\5\i\t\i\k\w\y\x\l\1\o\u\t\k\i\t\v\m\z\2\j\4\0\e\c\6\b\v\f\7\h\b\9\b\b\m\t\9\j\f\u\9\e\o\1\a\3\w\u\t\r\2\b\f\m\2\y\k\v\k\v\c\u\3\n\v\q\z\g\j\x\3\m\k\b\o\3\o\n\h\z\o\h\x\4\c\d\5\l\4\n\3\t\r\2\e\v\m\m\z\g\r\h\1\2\m\g\f\j\y\1\q\8\m\2\j\7\w\s\l\j\m\2\a\v\k\3\o\0\w\9\v\v\k\m\1\5\e\s\4\a\4\2\z\y\n\l\i\k\p\z\l\n\l\g\u\9\l\z\8\v\m\w\q\0\4\r\b\b\0\2\4\f\6\7\2\s\x\g\t\j\o\o\z\r\l\c\b\7\i\i\x\z\5\3\1\g\7\f\g\j\e\o\h\9\p\1\8\j\4\z\p\t\q\1\t\f\8\l\9\m\k\4\w\7\h\p\e\d\d\a\y\o\0\e\i\3\9\d\e\o\n\4\l\o\e\h\w\h\s\3\a\v\n\7\g\g\l\5\k\w\c\y\j\x\q\4\d\z\2\5\z\8\6\j\8\y\r\2\j\a\9\2\u\3\x\t\3\1\6\q\5\c\r\3\7\z\4\q\4\l\g\x\p\0\q\h\l\l\3\4\a\r\6\l\g\y\1\0\m\0\s\b\z\3\5\t\4\n\7\h\a\w\j\q\j\g\z\4\j\x\g\m\t\6\4\t\y\p\q\m\k\g\s\9\r\u\y\3\a\9\r\d\k\w\q\i\m\x\e\c\m\0\b\t\7\l\4\d\4\j\q\t\p\a\j\e\n\6\t\c\3\7\w\1\n\1\5\v\1\9\g\4\n\6\x\h\4\y\8\o\0\3\4\4\t\7\8\k\v\1\4\1\i\s\o\6\2\i\e\g\z\b\k\b\o\o\q\q\c\2\5\c\i\3\3\f\n\p\7\i\n\8\n\q\d\f\g\0\9\b\p\0\6\1\4\d\m\j\a\j\x\y\d\8\4\7\u\l\r\t\v\9\w\l\0\j\t\d\s\j\a\y\6\l\v\n ]] 00:07:13.061 20:48:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.061 20:48:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:13.319 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:13.319 [2024-08-11 20:48:23.878685] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:13.319 [2024-08-11 20:48:23.878807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71306 ] 00:07:13.319 [2024-08-11 20:48:24.018698] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.319 [2024-08-11 20:48:24.091994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.578 [2024-08-11 20:48:24.149824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.836  Copying: 512/512 [B] (average 500 kBps) 00:07:13.836 00:07:13.836 20:48:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xxby96pgi8kx48e5itikwyxl1outkitvmz2j40ec6bvf7hb9bbmt9jfu9eo1a3wutr2bfm2ykvkvcu3nvqzgjx3mkbo3onhzohx4cd5l4n3tr2evmmzgrh12mgfjy1q8m2j7wsljm2avk3o0w9vvkm15es4a42zynlikpzlnlgu9lz8vmwq04rbb024f672sxgtjoozrlcb7iixz531g7fgjeoh9p18j4zptq1tf8l9mk4w7hpeddayo0ei39deon4loehwhs3avn7ggl5kwcyjxq4dz25z86j8yr2ja92u3xt316q5cr37z4q4lgxp0qhll34ar6lgy10m0sbz35t4n7hawjqjgz4jxgmt64typqmkgs9ruy3a9rdkwqimxecm0bt7l4d4jqtpajen6tc37w1n15v19g4n6xh4y8o0344t78kv141iso62iegzbkbooqqc25ci33fnp7in8nqdfg09bp0614dmjajxyd847ulrtv9wl0jtdsjay6lvn == \x\x\b\y\9\6\p\g\i\8\k\x\4\8\e\5\i\t\i\k\w\y\x\l\1\o\u\t\k\i\t\v\m\z\2\j\4\0\e\c\6\b\v\f\7\h\b\9\b\b\m\t\9\j\f\u\9\e\o\1\a\3\w\u\t\r\2\b\f\m\2\y\k\v\k\v\c\u\3\n\v\q\z\g\j\x\3\m\k\b\o\3\o\n\h\z\o\h\x\4\c\d\5\l\4\n\3\t\r\2\e\v\m\m\z\g\r\h\1\2\m\g\f\j\y\1\q\8\m\2\j\7\w\s\l\j\m\2\a\v\k\3\o\0\w\9\v\v\k\m\1\5\e\s\4\a\4\2\z\y\n\l\i\k\p\z\l\n\l\g\u\9\l\z\8\v\m\w\q\0\4\r\b\b\0\2\4\f\6\7\2\s\x\g\t\j\o\o\z\r\l\c\b\7\i\i\x\z\5\3\1\g\7\f\g\j\e\o\h\9\p\1\8\j\4\z\p\t\q\1\t\f\8\l\9\m\k\4\w\7\h\p\e\d\d\a\y\o\0\e\i\3\9\d\e\o\n\4\l\o\e\h\w\h\s\3\a\v\n\7\g\g\l\5\k\w\c\y\j\x\q\4\d\z\2\5\z\8\6\j\8\y\r\2\j\a\9\2\u\3\x\t\3\1\6\q\5\c\r\3\7\z\4\q\4\l\g\x\p\0\q\h\l\l\3\4\a\r\6\l\g\y\1\0\m\0\s\b\z\3\5\t\4\n\7\h\a\w\j\q\j\g\z\4\j\x\g\m\t\6\4\t\y\p\q\m\k\g\s\9\r\u\y\3\a\9\r\d\k\w\q\i\m\x\e\c\m\0\b\t\7\l\4\d\4\j\q\t\p\a\j\e\n\6\t\c\3\7\w\1\n\1\5\v\1\9\g\4\n\6\x\h\4\y\8\o\0\3\4\4\t\7\8\k\v\1\4\1\i\s\o\6\2\i\e\g\z\b\k\b\o\o\q\q\c\2\5\c\i\3\3\f\n\p\7\i\n\8\n\q\d\f\g\0\9\b\p\0\6\1\4\d\m\j\a\j\x\y\d\8\4\7\u\l\r\t\v\9\w\l\0\j\t\d\s\j\a\y\6\l\v\n ]] 00:07:13.836 20:48:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.836 20:48:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:13.836 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:13.836 [2024-08-11 20:48:24.433515] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:13.836 [2024-08-11 20:48:24.433631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71314 ] 00:07:13.836 [2024-08-11 20:48:24.571546] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.095 [2024-08-11 20:48:24.643766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.095 [2024-08-11 20:48:24.698950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.353  Copying: 512/512 [B] (average 250 kBps) 00:07:14.353 00:07:14.354 20:48:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xxby96pgi8kx48e5itikwyxl1outkitvmz2j40ec6bvf7hb9bbmt9jfu9eo1a3wutr2bfm2ykvkvcu3nvqzgjx3mkbo3onhzohx4cd5l4n3tr2evmmzgrh12mgfjy1q8m2j7wsljm2avk3o0w9vvkm15es4a42zynlikpzlnlgu9lz8vmwq04rbb024f672sxgtjoozrlcb7iixz531g7fgjeoh9p18j4zptq1tf8l9mk4w7hpeddayo0ei39deon4loehwhs3avn7ggl5kwcyjxq4dz25z86j8yr2ja92u3xt316q5cr37z4q4lgxp0qhll34ar6lgy10m0sbz35t4n7hawjqjgz4jxgmt64typqmkgs9ruy3a9rdkwqimxecm0bt7l4d4jqtpajen6tc37w1n15v19g4n6xh4y8o0344t78kv141iso62iegzbkbooqqc25ci33fnp7in8nqdfg09bp0614dmjajxyd847ulrtv9wl0jtdsjay6lvn == \x\x\b\y\9\6\p\g\i\8\k\x\4\8\e\5\i\t\i\k\w\y\x\l\1\o\u\t\k\i\t\v\m\z\2\j\4\0\e\c\6\b\v\f\7\h\b\9\b\b\m\t\9\j\f\u\9\e\o\1\a\3\w\u\t\r\2\b\f\m\2\y\k\v\k\v\c\u\3\n\v\q\z\g\j\x\3\m\k\b\o\3\o\n\h\z\o\h\x\4\c\d\5\l\4\n\3\t\r\2\e\v\m\m\z\g\r\h\1\2\m\g\f\j\y\1\q\8\m\2\j\7\w\s\l\j\m\2\a\v\k\3\o\0\w\9\v\v\k\m\1\5\e\s\4\a\4\2\z\y\n\l\i\k\p\z\l\n\l\g\u\9\l\z\8\v\m\w\q\0\4\r\b\b\0\2\4\f\6\7\2\s\x\g\t\j\o\o\z\r\l\c\b\7\i\i\x\z\5\3\1\g\7\f\g\j\e\o\h\9\p\1\8\j\4\z\p\t\q\1\t\f\8\l\9\m\k\4\w\7\h\p\e\d\d\a\y\o\0\e\i\3\9\d\e\o\n\4\l\o\e\h\w\h\s\3\a\v\n\7\g\g\l\5\k\w\c\y\j\x\q\4\d\z\2\5\z\8\6\j\8\y\r\2\j\a\9\2\u\3\x\t\3\1\6\q\5\c\r\3\7\z\4\q\4\l\g\x\p\0\q\h\l\l\3\4\a\r\6\l\g\y\1\0\m\0\s\b\z\3\5\t\4\n\7\h\a\w\j\q\j\g\z\4\j\x\g\m\t\6\4\t\y\p\q\m\k\g\s\9\r\u\y\3\a\9\r\d\k\w\q\i\m\x\e\c\m\0\b\t\7\l\4\d\4\j\q\t\p\a\j\e\n\6\t\c\3\7\w\1\n\1\5\v\1\9\g\4\n\6\x\h\4\y\8\o\0\3\4\4\t\7\8\k\v\1\4\1\i\s\o\6\2\i\e\g\z\b\k\b\o\o\q\q\c\2\5\c\i\3\3\f\n\p\7\i\n\8\n\q\d\f\g\0\9\b\p\0\6\1\4\d\m\j\a\j\x\y\d\8\4\7\u\l\r\t\v\9\w\l\0\j\t\d\s\j\a\y\6\l\v\n ]] 00:07:14.354 20:48:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.354 20:48:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:14.354 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:14.354 [2024-08-11 20:48:24.974678] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:14.354 [2024-08-11 20:48:24.974775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71321 ] 00:07:14.354 [2024-08-11 20:48:25.112112] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.613 [2024-08-11 20:48:25.173322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.613 [2024-08-11 20:48:25.227123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.872  Copying: 512/512 [B] (average 500 kBps) 00:07:14.872 00:07:14.872 20:48:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xxby96pgi8kx48e5itikwyxl1outkitvmz2j40ec6bvf7hb9bbmt9jfu9eo1a3wutr2bfm2ykvkvcu3nvqzgjx3mkbo3onhzohx4cd5l4n3tr2evmmzgrh12mgfjy1q8m2j7wsljm2avk3o0w9vvkm15es4a42zynlikpzlnlgu9lz8vmwq04rbb024f672sxgtjoozrlcb7iixz531g7fgjeoh9p18j4zptq1tf8l9mk4w7hpeddayo0ei39deon4loehwhs3avn7ggl5kwcyjxq4dz25z86j8yr2ja92u3xt316q5cr37z4q4lgxp0qhll34ar6lgy10m0sbz35t4n7hawjqjgz4jxgmt64typqmkgs9ruy3a9rdkwqimxecm0bt7l4d4jqtpajen6tc37w1n15v19g4n6xh4y8o0344t78kv141iso62iegzbkbooqqc25ci33fnp7in8nqdfg09bp0614dmjajxyd847ulrtv9wl0jtdsjay6lvn == \x\x\b\y\9\6\p\g\i\8\k\x\4\8\e\5\i\t\i\k\w\y\x\l\1\o\u\t\k\i\t\v\m\z\2\j\4\0\e\c\6\b\v\f\7\h\b\9\b\b\m\t\9\j\f\u\9\e\o\1\a\3\w\u\t\r\2\b\f\m\2\y\k\v\k\v\c\u\3\n\v\q\z\g\j\x\3\m\k\b\o\3\o\n\h\z\o\h\x\4\c\d\5\l\4\n\3\t\r\2\e\v\m\m\z\g\r\h\1\2\m\g\f\j\y\1\q\8\m\2\j\7\w\s\l\j\m\2\a\v\k\3\o\0\w\9\v\v\k\m\1\5\e\s\4\a\4\2\z\y\n\l\i\k\p\z\l\n\l\g\u\9\l\z\8\v\m\w\q\0\4\r\b\b\0\2\4\f\6\7\2\s\x\g\t\j\o\o\z\r\l\c\b\7\i\i\x\z\5\3\1\g\7\f\g\j\e\o\h\9\p\1\8\j\4\z\p\t\q\1\t\f\8\l\9\m\k\4\w\7\h\p\e\d\d\a\y\o\0\e\i\3\9\d\e\o\n\4\l\o\e\h\w\h\s\3\a\v\n\7\g\g\l\5\k\w\c\y\j\x\q\4\d\z\2\5\z\8\6\j\8\y\r\2\j\a\9\2\u\3\x\t\3\1\6\q\5\c\r\3\7\z\4\q\4\l\g\x\p\0\q\h\l\l\3\4\a\r\6\l\g\y\1\0\m\0\s\b\z\3\5\t\4\n\7\h\a\w\j\q\j\g\z\4\j\x\g\m\t\6\4\t\y\p\q\m\k\g\s\9\r\u\y\3\a\9\r\d\k\w\q\i\m\x\e\c\m\0\b\t\7\l\4\d\4\j\q\t\p\a\j\e\n\6\t\c\3\7\w\1\n\1\5\v\1\9\g\4\n\6\x\h\4\y\8\o\0\3\4\4\t\7\8\k\v\1\4\1\i\s\o\6\2\i\e\g\z\b\k\b\o\o\q\q\c\2\5\c\i\3\3\f\n\p\7\i\n\8\n\q\d\f\g\0\9\b\p\0\6\1\4\d\m\j\a\j\x\y\d\8\4\7\u\l\r\t\v\9\w\l\0\j\t\d\s\j\a\y\6\l\v\n ]] 00:07:14.872 00:07:14.872 real 0m4.505s 00:07:14.872 user 0m2.285s 00:07:14.872 sys 0m1.219s 00:07:14.872 ************************************ 00:07:14.872 END TEST dd_flags_misc_forced_aio 00:07:14.872 ************************************ 00:07:14.872 20:48:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.872 20:48:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:14.872 20:48:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:14.872 20:48:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:14.872 20:48:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:14.872 ************************************ 00:07:14.872 END TEST spdk_dd_posix 00:07:14.872 ************************************ 00:07:14.872 00:07:14.872 real 0m20.726s 00:07:14.872 user 0m9.683s 00:07:14.872 sys 0m6.845s 00:07:14.872 20:48:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1122 
-- # xtrace_disable 00:07:14.872 20:48:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:14.872 20:48:25 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:14.872 20:48:25 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.872 20:48:25 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.872 20:48:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:14.872 ************************************ 00:07:14.872 START TEST spdk_dd_malloc 00:07:14.872 ************************************ 00:07:14.872 20:48:25 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:14.872 * Looking for test storage... 00:07:14.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:14.872 20:48:25 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:15.132 ************************************ 00:07:15.132 START TEST dd_malloc_copy 00:07:15.132 ************************************ 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1121 -- # malloc_copy 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:15.132 20:48:25 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.132 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:15.132 [2024-08-11 20:48:25.716670] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:15.132 [2024-08-11 20:48:25.716779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71390 ] 00:07:15.132 { 00:07:15.132 "subsystems": [ 00:07:15.132 { 00:07:15.132 "subsystem": "bdev", 00:07:15.132 "config": [ 00:07:15.132 { 00:07:15.132 "params": { 00:07:15.132 "block_size": 512, 00:07:15.132 "num_blocks": 1048576, 00:07:15.132 "name": "malloc0" 00:07:15.132 }, 00:07:15.132 "method": "bdev_malloc_create" 00:07:15.132 }, 00:07:15.133 { 00:07:15.133 "params": { 00:07:15.133 "block_size": 512, 00:07:15.133 "num_blocks": 1048576, 00:07:15.133 "name": "malloc1" 00:07:15.133 }, 00:07:15.133 "method": "bdev_malloc_create" 00:07:15.133 }, 00:07:15.133 { 00:07:15.133 "method": "bdev_wait_for_examine" 00:07:15.133 } 00:07:15.133 ] 00:07:15.133 } 00:07:15.133 ] 00:07:15.133 } 00:07:15.133 [2024-08-11 20:48:25.857522] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.391 [2024-08-11 20:48:25.927254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.392 [2024-08-11 20:48:25.986470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.529  Copying: 231/512 [MB] (231 MBps) Copying: 461/512 [MB] (229 MBps) Copying: 512/512 [MB] (average 230 MBps) 00:07:18.529 00:07:18.529 20:48:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:18.529 20:48:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:18.529 20:48:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:18.529 20:48:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:18.529 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:18.529 [2024-08-11 20:48:29.214154] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:18.529 [2024-08-11 20:48:29.214268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71437 ] 00:07:18.529 { 00:07:18.529 "subsystems": [ 00:07:18.529 { 00:07:18.529 "subsystem": "bdev", 00:07:18.529 "config": [ 00:07:18.529 { 00:07:18.529 "params": { 00:07:18.529 "block_size": 512, 00:07:18.529 "num_blocks": 1048576, 00:07:18.529 "name": "malloc0" 00:07:18.529 }, 00:07:18.529 "method": "bdev_malloc_create" 00:07:18.529 }, 00:07:18.529 { 00:07:18.529 "params": { 00:07:18.529 "block_size": 512, 00:07:18.529 "num_blocks": 1048576, 00:07:18.529 "name": "malloc1" 00:07:18.529 }, 00:07:18.529 "method": "bdev_malloc_create" 00:07:18.529 }, 00:07:18.529 { 00:07:18.529 "method": "bdev_wait_for_examine" 00:07:18.529 } 00:07:18.529 ] 00:07:18.529 } 00:07:18.529 ] 00:07:18.529 } 00:07:18.788 [2024-08-11 20:48:29.351373] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.788 [2024-08-11 20:48:29.417432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.788 [2024-08-11 20:48:29.473114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.932  Copying: 228/512 [MB] (228 MBps) Copying: 434/512 [MB] (206 MBps) Copying: 512/512 [MB] (average 220 MBps) 00:07:21.932 00:07:21.932 00:07:21.932 real 0m7.031s 00:07:21.932 user 0m5.995s 00:07:21.932 sys 0m0.888s 00:07:21.932 20:48:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.932 ************************************ 00:07:21.932 END TEST dd_malloc_copy 00:07:21.932 ************************************ 00:07:21.932 20:48:32 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:22.191 00:07:22.191 real 0m7.173s 00:07:22.191 user 0m6.050s 00:07:22.191 sys 0m0.974s 00:07:22.191 20:48:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.191 20:48:32 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:22.191 ************************************ 00:07:22.191 END TEST spdk_dd_malloc 00:07:22.191 ************************************ 00:07:22.191 20:48:32 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:22.191 20:48:32 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:22.191 20:48:32 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.191 20:48:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:22.191 ************************************ 00:07:22.191 START TEST spdk_dd_bdev_to_bdev 00:07:22.191 ************************************ 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:22.191 * Looking for test storage... 
00:07:22.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:22.191 
20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:22.191 ************************************ 00:07:22.191 START TEST dd_inflate_file 00:07:22.191 ************************************ 00:07:22.191 20:48:32 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:22.191 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:22.191 [2024-08-11 20:48:32.930998] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:22.191 [2024-08-11 20:48:32.931613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71542 ] 00:07:22.450 [2024-08-11 20:48:33.065076] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.450 [2024-08-11 20:48:33.136418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.450 [2024-08-11 20:48:33.188198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.707  Copying: 64/64 [MB] (average 1560 MBps) 00:07:22.707 00:07:22.707 00:07:22.707 real 0m0.581s 00:07:22.707 user 0m0.322s 00:07:22.707 sys 0m0.308s 00:07:22.707 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.707 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:22.707 ************************************ 00:07:22.707 END TEST dd_inflate_file 00:07:22.708 ************************************ 00:07:22.966 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:22.966 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:22.966 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:22.966 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:22.966 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:22.966 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:22.966 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.966 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:22.966 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:22.966 ************************************ 00:07:22.966 START TEST dd_copy_to_out_bdev 00:07:22.966 ************************************ 00:07:22.966 20:48:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:22.966 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:22.966 [2024-08-11 20:48:33.572866] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:22.966 [2024-08-11 20:48:33.572968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71581 ] 00:07:22.966 { 00:07:22.966 "subsystems": [ 00:07:22.966 { 00:07:22.966 "subsystem": "bdev", 00:07:22.966 "config": [ 00:07:22.966 { 00:07:22.966 "params": { 00:07:22.966 "trtype": "pcie", 00:07:22.966 "traddr": "0000:00:10.0", 00:07:22.966 "name": "Nvme0" 00:07:22.966 }, 00:07:22.966 "method": "bdev_nvme_attach_controller" 00:07:22.966 }, 00:07:22.966 { 00:07:22.966 "params": { 00:07:22.966 "trtype": "pcie", 00:07:22.966 "traddr": "0000:00:11.0", 00:07:22.966 "name": "Nvme1" 00:07:22.966 }, 00:07:22.966 "method": "bdev_nvme_attach_controller" 00:07:22.966 }, 00:07:22.966 { 00:07:22.966 "method": "bdev_wait_for_examine" 00:07:22.966 } 00:07:22.966 ] 00:07:22.966 } 00:07:22.966 ] 00:07:22.966 } 00:07:22.966 [2024-08-11 20:48:33.709872] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.225 [2024-08-11 20:48:33.767831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.225 [2024-08-11 20:48:33.820717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.861  Copying: 50/64 [MB] (50 MBps) Copying: 64/64 [MB] (average 50 MBps) 00:07:24.861 00:07:24.861 00:07:24.861 real 0m2.060s 00:07:24.861 user 0m1.786s 00:07:24.861 sys 0m1.733s 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.861 ************************************ 00:07:24.861 END TEST dd_copy_to_out_bdev 00:07:24.861 ************************************ 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:24.861 ************************************ 00:07:24.861 START TEST dd_offset_magic 00:07:24.861 ************************************ 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1121 -- # offset_magic 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:24.861 20:48:35 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:24.861 20:48:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:25.120 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:25.120 [2024-08-11 20:48:35.681481] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:25.120 [2024-08-11 20:48:35.682220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71626 ] 00:07:25.120 { 00:07:25.120 "subsystems": [ 00:07:25.120 { 00:07:25.120 "subsystem": "bdev", 00:07:25.120 "config": [ 00:07:25.120 { 00:07:25.120 "params": { 00:07:25.120 "trtype": "pcie", 00:07:25.120 "traddr": "0000:00:10.0", 00:07:25.120 "name": "Nvme0" 00:07:25.120 }, 00:07:25.120 "method": "bdev_nvme_attach_controller" 00:07:25.120 }, 00:07:25.120 { 00:07:25.120 "params": { 00:07:25.120 "trtype": "pcie", 00:07:25.120 "traddr": "0000:00:11.0", 00:07:25.120 "name": "Nvme1" 00:07:25.120 }, 00:07:25.120 "method": "bdev_nvme_attach_controller" 00:07:25.120 }, 00:07:25.120 { 00:07:25.120 "method": "bdev_wait_for_examine" 00:07:25.120 } 00:07:25.120 ] 00:07:25.120 } 00:07:25.120 ] 00:07:25.120 } 00:07:25.120 [2024-08-11 20:48:35.825908] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.120 [2024-08-11 20:48:35.879781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.378 [2024-08-11 20:48:35.932601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.896  Copying: 65/65 [MB] (average 833 MBps) 00:07:25.896 00:07:25.896 20:48:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:25.896 20:48:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:25.896 20:48:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:25.896 20:48:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:25.896 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:25.896 [2024-08-11 20:48:36.523616] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:25.896 [2024-08-11 20:48:36.524202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71640 ] 00:07:25.896 { 00:07:25.896 "subsystems": [ 00:07:25.896 { 00:07:25.896 "subsystem": "bdev", 00:07:25.896 "config": [ 00:07:25.896 { 00:07:25.896 "params": { 00:07:25.896 "trtype": "pcie", 00:07:25.896 "traddr": "0000:00:10.0", 00:07:25.896 "name": "Nvme0" 00:07:25.896 }, 00:07:25.896 "method": "bdev_nvme_attach_controller" 00:07:25.896 }, 00:07:25.896 { 00:07:25.896 "params": { 00:07:25.896 "trtype": "pcie", 00:07:25.896 "traddr": "0000:00:11.0", 00:07:25.896 "name": "Nvme1" 00:07:25.896 }, 00:07:25.896 "method": "bdev_nvme_attach_controller" 00:07:25.896 }, 00:07:25.896 { 00:07:25.896 "method": "bdev_wait_for_examine" 00:07:25.896 } 00:07:25.896 ] 00:07:25.896 } 00:07:25.896 ] 00:07:25.896 } 00:07:25.896 [2024-08-11 20:48:36.664197] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.155 [2024-08-11 20:48:36.720330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.155 [2024-08-11 20:48:36.772646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.413  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:26.413 00:07:26.413 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:26.413 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:26.413 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:26.413 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:26.413 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:26.413 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:26.413 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:26.413 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:26.413 [2024-08-11 20:48:37.176169] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:26.413 [2024-08-11 20:48:37.176272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71657 ] 00:07:26.672 { 00:07:26.672 "subsystems": [ 00:07:26.672 { 00:07:26.672 "subsystem": "bdev", 00:07:26.672 "config": [ 00:07:26.672 { 00:07:26.672 "params": { 00:07:26.672 "trtype": "pcie", 00:07:26.672 "traddr": "0000:00:10.0", 00:07:26.672 "name": "Nvme0" 00:07:26.672 }, 00:07:26.672 "method": "bdev_nvme_attach_controller" 00:07:26.672 }, 00:07:26.672 { 00:07:26.672 "params": { 00:07:26.672 "trtype": "pcie", 00:07:26.672 "traddr": "0000:00:11.0", 00:07:26.672 "name": "Nvme1" 00:07:26.672 }, 00:07:26.672 "method": "bdev_nvme_attach_controller" 00:07:26.672 }, 00:07:26.672 { 00:07:26.672 "method": "bdev_wait_for_examine" 00:07:26.672 } 00:07:26.672 ] 00:07:26.672 } 00:07:26.672 ] 00:07:26.672 } 00:07:26.672 [2024-08-11 20:48:37.305859] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.672 [2024-08-11 20:48:37.369927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.672 [2024-08-11 20:48:37.422119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.189  Copying: 65/65 [MB] (average 878 MBps) 00:07:27.189 00:07:27.189 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:27.189 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:27.189 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:27.189 20:48:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:27.189 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:27.189 [2024-08-11 20:48:37.962341] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:27.189 [2024-08-11 20:48:37.962455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71677 ] 00:07:27.448 { 00:07:27.448 "subsystems": [ 00:07:27.448 { 00:07:27.448 "subsystem": "bdev", 00:07:27.448 "config": [ 00:07:27.448 { 00:07:27.448 "params": { 00:07:27.448 "trtype": "pcie", 00:07:27.448 "traddr": "0000:00:10.0", 00:07:27.448 "name": "Nvme0" 00:07:27.448 }, 00:07:27.448 "method": "bdev_nvme_attach_controller" 00:07:27.448 }, 00:07:27.448 { 00:07:27.448 "params": { 00:07:27.448 "trtype": "pcie", 00:07:27.448 "traddr": "0000:00:11.0", 00:07:27.448 "name": "Nvme1" 00:07:27.448 }, 00:07:27.448 "method": "bdev_nvme_attach_controller" 00:07:27.448 }, 00:07:27.448 { 00:07:27.448 "method": "bdev_wait_for_examine" 00:07:27.448 } 00:07:27.448 ] 00:07:27.448 } 00:07:27.448 ] 00:07:27.448 } 00:07:27.448 [2024-08-11 20:48:38.099512] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.448 [2024-08-11 20:48:38.154854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.448 [2024-08-11 20:48:38.208686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.966  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:27.966 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:27.966 00:07:27.966 real 0m2.958s 00:07:27.966 user 0m2.110s 00:07:27.966 sys 0m0.911s 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.966 ************************************ 00:07:27.966 END TEST dd_offset_magic 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:27.966 ************************************ 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:27.966 20:48:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:27.966 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:27.966 [2024-08-11 20:48:38.679373] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:27.966 [2024-08-11 20:48:38.679517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71714 ] 00:07:27.966 { 00:07:27.966 "subsystems": [ 00:07:27.966 { 00:07:27.966 "subsystem": "bdev", 00:07:27.966 "config": [ 00:07:27.966 { 00:07:27.966 "params": { 00:07:27.966 "trtype": "pcie", 00:07:27.966 "traddr": "0000:00:10.0", 00:07:27.966 "name": "Nvme0" 00:07:27.966 }, 00:07:27.966 "method": "bdev_nvme_attach_controller" 00:07:27.966 }, 00:07:27.966 { 00:07:27.966 "params": { 00:07:27.966 "trtype": "pcie", 00:07:27.966 "traddr": "0000:00:11.0", 00:07:27.966 "name": "Nvme1" 00:07:27.966 }, 00:07:27.966 "method": "bdev_nvme_attach_controller" 00:07:27.966 }, 00:07:27.966 { 00:07:27.966 "method": "bdev_wait_for_examine" 00:07:27.966 } 00:07:27.966 ] 00:07:27.966 } 00:07:27.966 ] 00:07:27.966 } 00:07:28.225 [2024-08-11 20:48:38.812743] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.225 [2024-08-11 20:48:38.871028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.225 [2024-08-11 20:48:38.925161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.742  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:28.742 00:07:28.742 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:28.742 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:28.742 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:28.742 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:28.742 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:28.742 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:28.742 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:28.742 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:28.742 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:28.742 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:28.742 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:28.742 [2024-08-11 20:48:39.344799] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:28.742 [2024-08-11 20:48:39.344911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71724 ] 00:07:28.742 { 00:07:28.742 "subsystems": [ 00:07:28.742 { 00:07:28.742 "subsystem": "bdev", 00:07:28.742 "config": [ 00:07:28.742 { 00:07:28.742 "params": { 00:07:28.742 "trtype": "pcie", 00:07:28.742 "traddr": "0000:00:10.0", 00:07:28.742 "name": "Nvme0" 00:07:28.742 }, 00:07:28.742 "method": "bdev_nvme_attach_controller" 00:07:28.742 }, 00:07:28.742 { 00:07:28.742 "params": { 00:07:28.742 "trtype": "pcie", 00:07:28.742 "traddr": "0000:00:11.0", 00:07:28.742 "name": "Nvme1" 00:07:28.742 }, 00:07:28.742 "method": "bdev_nvme_attach_controller" 00:07:28.742 }, 00:07:28.742 { 00:07:28.742 "method": "bdev_wait_for_examine" 00:07:28.742 } 00:07:28.742 ] 00:07:28.742 } 00:07:28.742 ] 00:07:28.742 } 00:07:28.742 [2024-08-11 20:48:39.482814] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.001 [2024-08-11 20:48:39.539200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.001 [2024-08-11 20:48:39.590480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.259  Copying: 5120/5120 [kB] (average 714 MBps) 00:07:29.259 00:07:29.259 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:29.259 00:07:29.259 real 0m7.202s 00:07:29.259 user 0m5.236s 00:07:29.259 sys 0m3.625s 00:07:29.259 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.259 20:48:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:29.259 ************************************ 00:07:29.259 END TEST spdk_dd_bdev_to_bdev 00:07:29.259 ************************************ 00:07:29.259 20:48:40 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:29.259 20:48:40 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:29.259 20:48:40 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:29.259 20:48:40 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.259 20:48:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:29.519 ************************************ 00:07:29.519 START TEST spdk_dd_uring 00:07:29.519 ************************************ 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:29.519 * Looking for test storage... 
00:07:29.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:29.519 ************************************ 00:07:29.519 START TEST dd_uring_copy 00:07:29.519 ************************************ 00:07:29.519 
20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1121 -- # uring_zram_copy 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=3uqy866m50kfj2crtx7qh9lsu8wk5ce1c9wb8p2rsfkvmac2lq212a66xrf9yqrnrhl9n311lceps1nwehxsa242nyk3609jkzwj7e5df9rbwtg6dnjeb8ms8897b226vgr1cutzlzrv6sc81bktt6x89x2j2mm4ocfygv9arn7dz7772wkoqp68rmoe1h6glquklfuqtffghg2p15bic1ho4s78q6gc2nux9f9h7sgklaqdrzxqaadri4ffys9y2012rn93swqyy7g3711kl63u63bdunhuy9filik4xl03a1roxrva0z49vg2rl68poe3dse2l3oth93iltwgj95ik7hko3bdtfw8n8o1jnevabxgmxk4et82kaslfosg2vgbbs98zph1is9u9iieaouz2f7z9hyh9ka31i2zx0ids7tyq3qqaqm0z8ydkx9osunyj6c9m3vi6tuny9pzvtdydaq3k7wnae7vm9ndcnpn5ahtn4ysbwrww1d0nept2nmaoehrumw2rwv0oh3d3slsxetciknj8tt6rxtnaz4k4yj9ib4lnczz5blbyffys639lne4oi5plf3rrxw2iitojfwgn8pe8h9sk6nj7xjqbbndxvnevrddqj8vd6kmb1eiff6ue4p70jhi2p84hx1ya4z5tmwkwcg0joaoal089o7ozu9rcu1ke7848cbcj8i9obve9tcih3gthw6pshdva4hey6cbfbn19ctrm102sq2ha2ezv8449qf4vwagp3uwhdye89tloegujvz0co74lfklwr5fqodz9v8wjar0asuzfmhryt9mgtppdl85desssxdal0q4im3np08tzvosl4jm4g2o57l2e1tzwrrmkw3xoh6mpjpa32dzv2y4s5cqyi9w52u3iz221sm8uxh9ayuycvj3gyfdqtyswkcy41283mg8ztquakddm9is4yvecy98tql07eotpfhvwmzpbu1xap4hvjusce28hbl7xhzcb5z7i4fphc5bf7247 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 3uqy866m50kfj2crtx7qh9lsu8wk5ce1c9wb8p2rsfkvmac2lq212a66xrf9yqrnrhl9n311lceps1nwehxsa242nyk3609jkzwj7e5df9rbwtg6dnjeb8ms8897b226vgr1cutzlzrv6sc81bktt6x89x2j2mm4ocfygv9arn7dz7772wkoqp68rmoe1h6glquklfuqtffghg2p15bic1ho4s78q6gc2nux9f9h7sgklaqdrzxqaadri4ffys9y2012rn93swqyy7g3711kl63u63bdunhuy9filik4xl03a1roxrva0z49vg2rl68poe3dse2l3oth93iltwgj95ik7hko3bdtfw8n8o1jnevabxgmxk4et82kaslfosg2vgbbs98zph1is9u9iieaouz2f7z9hyh9ka31i2zx0ids7tyq3qqaqm0z8ydkx9osunyj6c9m3vi6tuny9pzvtdydaq3k7wnae7vm9ndcnpn5ahtn4ysbwrww1d0nept2nmaoehrumw2rwv0oh3d3slsxetciknj8tt6rxtnaz4k4yj9ib4lnczz5blbyffys639lne4oi5plf3rrxw2iitojfwgn8pe8h9sk6nj7xjqbbndxvnevrddqj8vd6kmb1eiff6ue4p70jhi2p84hx1ya4z5tmwkwcg0joaoal089o7ozu9rcu1ke7848cbcj8i9obve9tcih3gthw6pshdva4hey6cbfbn19ctrm102sq2ha2ezv8449qf4vwagp3uwhdye89tloegujvz0co74lfklwr5fqodz9v8wjar0asuzfmhryt9mgtppdl85desssxdal0q4im3np08tzvosl4jm4g2o57l2e1tzwrrmkw3xoh6mpjpa32dzv2y4s5cqyi9w52u3iz221sm8uxh9ayuycvj3gyfdqtyswkcy41283mg8ztquakddm9is4yvecy98tql07eotpfhvwmzpbu1xap4hvjusce28hbl7xhzcb5z7i4fphc5bf7247 00:07:29.519 20:48:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:29.519 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:29.519 [2024-08-11 20:48:40.210846] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
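The zram setup traced above (init_zram, create_zram_dev, set_zram_dev 1 512M) drives the kernel's zram sysfs interface directly. A minimal standalone sketch of those steps, assuming the zram module is already loaded; the 512M size simply mirrors this run:

dev_id=$(cat /sys/class/zram-control/hot_add)    # reading hot_add allocates the next free zram device and prints its index
echo 512M > "/sys/block/zram${dev_id}/disksize"  # size the device before first use
ls -l "/dev/zram${dev_id}"                       # the block device that backs the uring0 bdev in this run (/dev/zram1 here)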
00:07:29.519 [2024-08-11 20:48:40.211427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71794 ] 00:07:29.778 [2024-08-11 20:48:40.349894] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.778 [2024-08-11 20:48:40.405850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.778 [2024-08-11 20:48:40.459503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.978  Copying: 511/511 [MB] (average 1110 MBps) 00:07:30.978 00:07:30.978 20:48:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:30.978 20:48:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:30.978 20:48:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:30.978 20:48:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:30.978 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:30.978 [2024-08-11 20:48:41.569431] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:30.978 [2024-08-11 20:48:41.569534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71814 ] 00:07:30.978 { 00:07:30.978 "subsystems": [ 00:07:30.978 { 00:07:30.978 "subsystem": "bdev", 00:07:30.978 "config": [ 00:07:30.978 { 00:07:30.978 "params": { 00:07:30.978 "block_size": 512, 00:07:30.978 "num_blocks": 1048576, 00:07:30.978 "name": "malloc0" 00:07:30.978 }, 00:07:30.978 "method": "bdev_malloc_create" 00:07:30.978 }, 00:07:30.978 { 00:07:30.978 "params": { 00:07:30.978 "filename": "/dev/zram1", 00:07:30.978 "name": "uring0" 00:07:30.978 }, 00:07:30.978 "method": "bdev_uring_create" 00:07:30.978 }, 00:07:30.978 { 00:07:30.978 "method": "bdev_wait_for_examine" 00:07:30.978 } 00:07:30.978 ] 00:07:30.978 } 00:07:30.978 ] 00:07:30.978 } 00:07:30.978 [2024-08-11 20:48:41.703464] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.237 [2024-08-11 20:48:41.759976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.237 [2024-08-11 20:48:41.811871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.806  Copying: 231/512 [MB] (231 MBps) Copying: 499/512 [MB] (268 MBps) Copying: 512/512 [MB] (average 250 MBps) 00:07:33.806 00:07:33.806 20:48:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:33.806 20:48:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:33.806 20:48:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:33.806 20:48:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:33.806 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:33.806 [2024-08-11 20:48:44.516069] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 
initialization... 00:07:33.806 [2024-08-11 20:48:44.516156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71854 ] 00:07:33.806 { 00:07:33.806 "subsystems": [ 00:07:33.806 { 00:07:33.806 "subsystem": "bdev", 00:07:33.806 "config": [ 00:07:33.806 { 00:07:33.806 "params": { 00:07:33.806 "block_size": 512, 00:07:33.806 "num_blocks": 1048576, 00:07:33.806 "name": "malloc0" 00:07:33.806 }, 00:07:33.806 "method": "bdev_malloc_create" 00:07:33.806 }, 00:07:33.806 { 00:07:33.806 "params": { 00:07:33.806 "filename": "/dev/zram1", 00:07:33.806 "name": "uring0" 00:07:33.806 }, 00:07:33.806 "method": "bdev_uring_create" 00:07:33.806 }, 00:07:33.806 { 00:07:33.806 "method": "bdev_wait_for_examine" 00:07:33.806 } 00:07:33.806 ] 00:07:33.806 } 00:07:33.806 ] 00:07:33.806 } 00:07:34.065 [2024-08-11 20:48:44.645599] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.065 [2024-08-11 20:48:44.707297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.065 [2024-08-11 20:48:44.763514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.596  Copying: 196/512 [MB] (196 MBps) Copying: 371/512 [MB] (174 MBps) Copying: 512/512 [MB] (average 182 MBps) 00:07:37.596 00:07:37.596 20:48:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:37.596 20:48:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 3uqy866m50kfj2crtx7qh9lsu8wk5ce1c9wb8p2rsfkvmac2lq212a66xrf9yqrnrhl9n311lceps1nwehxsa242nyk3609jkzwj7e5df9rbwtg6dnjeb8ms8897b226vgr1cutzlzrv6sc81bktt6x89x2j2mm4ocfygv9arn7dz7772wkoqp68rmoe1h6glquklfuqtffghg2p15bic1ho4s78q6gc2nux9f9h7sgklaqdrzxqaadri4ffys9y2012rn93swqyy7g3711kl63u63bdunhuy9filik4xl03a1roxrva0z49vg2rl68poe3dse2l3oth93iltwgj95ik7hko3bdtfw8n8o1jnevabxgmxk4et82kaslfosg2vgbbs98zph1is9u9iieaouz2f7z9hyh9ka31i2zx0ids7tyq3qqaqm0z8ydkx9osunyj6c9m3vi6tuny9pzvtdydaq3k7wnae7vm9ndcnpn5ahtn4ysbwrww1d0nept2nmaoehrumw2rwv0oh3d3slsxetciknj8tt6rxtnaz4k4yj9ib4lnczz5blbyffys639lne4oi5plf3rrxw2iitojfwgn8pe8h9sk6nj7xjqbbndxvnevrddqj8vd6kmb1eiff6ue4p70jhi2p84hx1ya4z5tmwkwcg0joaoal089o7ozu9rcu1ke7848cbcj8i9obve9tcih3gthw6pshdva4hey6cbfbn19ctrm102sq2ha2ezv8449qf4vwagp3uwhdye89tloegujvz0co74lfklwr5fqodz9v8wjar0asuzfmhryt9mgtppdl85desssxdal0q4im3np08tzvosl4jm4g2o57l2e1tzwrrmkw3xoh6mpjpa32dzv2y4s5cqyi9w52u3iz221sm8uxh9ayuycvj3gyfdqtyswkcy41283mg8ztquakddm9is4yvecy98tql07eotpfhvwmzpbu1xap4hvjusce28hbl7xhzcb5z7i4fphc5bf7247 == 
\3\u\q\y\8\6\6\m\5\0\k\f\j\2\c\r\t\x\7\q\h\9\l\s\u\8\w\k\5\c\e\1\c\9\w\b\8\p\2\r\s\f\k\v\m\a\c\2\l\q\2\1\2\a\6\6\x\r\f\9\y\q\r\n\r\h\l\9\n\3\1\1\l\c\e\p\s\1\n\w\e\h\x\s\a\2\4\2\n\y\k\3\6\0\9\j\k\z\w\j\7\e\5\d\f\9\r\b\w\t\g\6\d\n\j\e\b\8\m\s\8\8\9\7\b\2\2\6\v\g\r\1\c\u\t\z\l\z\r\v\6\s\c\8\1\b\k\t\t\6\x\8\9\x\2\j\2\m\m\4\o\c\f\y\g\v\9\a\r\n\7\d\z\7\7\7\2\w\k\o\q\p\6\8\r\m\o\e\1\h\6\g\l\q\u\k\l\f\u\q\t\f\f\g\h\g\2\p\1\5\b\i\c\1\h\o\4\s\7\8\q\6\g\c\2\n\u\x\9\f\9\h\7\s\g\k\l\a\q\d\r\z\x\q\a\a\d\r\i\4\f\f\y\s\9\y\2\0\1\2\r\n\9\3\s\w\q\y\y\7\g\3\7\1\1\k\l\6\3\u\6\3\b\d\u\n\h\u\y\9\f\i\l\i\k\4\x\l\0\3\a\1\r\o\x\r\v\a\0\z\4\9\v\g\2\r\l\6\8\p\o\e\3\d\s\e\2\l\3\o\t\h\9\3\i\l\t\w\g\j\9\5\i\k\7\h\k\o\3\b\d\t\f\w\8\n\8\o\1\j\n\e\v\a\b\x\g\m\x\k\4\e\t\8\2\k\a\s\l\f\o\s\g\2\v\g\b\b\s\9\8\z\p\h\1\i\s\9\u\9\i\i\e\a\o\u\z\2\f\7\z\9\h\y\h\9\k\a\3\1\i\2\z\x\0\i\d\s\7\t\y\q\3\q\q\a\q\m\0\z\8\y\d\k\x\9\o\s\u\n\y\j\6\c\9\m\3\v\i\6\t\u\n\y\9\p\z\v\t\d\y\d\a\q\3\k\7\w\n\a\e\7\v\m\9\n\d\c\n\p\n\5\a\h\t\n\4\y\s\b\w\r\w\w\1\d\0\n\e\p\t\2\n\m\a\o\e\h\r\u\m\w\2\r\w\v\0\o\h\3\d\3\s\l\s\x\e\t\c\i\k\n\j\8\t\t\6\r\x\t\n\a\z\4\k\4\y\j\9\i\b\4\l\n\c\z\z\5\b\l\b\y\f\f\y\s\6\3\9\l\n\e\4\o\i\5\p\l\f\3\r\r\x\w\2\i\i\t\o\j\f\w\g\n\8\p\e\8\h\9\s\k\6\n\j\7\x\j\q\b\b\n\d\x\v\n\e\v\r\d\d\q\j\8\v\d\6\k\m\b\1\e\i\f\f\6\u\e\4\p\7\0\j\h\i\2\p\8\4\h\x\1\y\a\4\z\5\t\m\w\k\w\c\g\0\j\o\a\o\a\l\0\8\9\o\7\o\z\u\9\r\c\u\1\k\e\7\8\4\8\c\b\c\j\8\i\9\o\b\v\e\9\t\c\i\h\3\g\t\h\w\6\p\s\h\d\v\a\4\h\e\y\6\c\b\f\b\n\1\9\c\t\r\m\1\0\2\s\q\2\h\a\2\e\z\v\8\4\4\9\q\f\4\v\w\a\g\p\3\u\w\h\d\y\e\8\9\t\l\o\e\g\u\j\v\z\0\c\o\7\4\l\f\k\l\w\r\5\f\q\o\d\z\9\v\8\w\j\a\r\0\a\s\u\z\f\m\h\r\y\t\9\m\g\t\p\p\d\l\8\5\d\e\s\s\s\x\d\a\l\0\q\4\i\m\3\n\p\0\8\t\z\v\o\s\l\4\j\m\4\g\2\o\5\7\l\2\e\1\t\z\w\r\r\m\k\w\3\x\o\h\6\m\p\j\p\a\3\2\d\z\v\2\y\4\s\5\c\q\y\i\9\w\5\2\u\3\i\z\2\2\1\s\m\8\u\x\h\9\a\y\u\y\c\v\j\3\g\y\f\d\q\t\y\s\w\k\c\y\4\1\2\8\3\m\g\8\z\t\q\u\a\k\d\d\m\9\i\s\4\y\v\e\c\y\9\8\t\q\l\0\7\e\o\t\p\f\h\v\w\m\z\p\b\u\1\x\a\p\4\h\v\j\u\s\c\e\2\8\h\b\l\7\x\h\z\c\b\5\z\7\i\4\f\p\h\c\5\b\f\7\2\4\7 ]] 00:07:37.596 20:48:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:37.596 20:48:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 3uqy866m50kfj2crtx7qh9lsu8wk5ce1c9wb8p2rsfkvmac2lq212a66xrf9yqrnrhl9n311lceps1nwehxsa242nyk3609jkzwj7e5df9rbwtg6dnjeb8ms8897b226vgr1cutzlzrv6sc81bktt6x89x2j2mm4ocfygv9arn7dz7772wkoqp68rmoe1h6glquklfuqtffghg2p15bic1ho4s78q6gc2nux9f9h7sgklaqdrzxqaadri4ffys9y2012rn93swqyy7g3711kl63u63bdunhuy9filik4xl03a1roxrva0z49vg2rl68poe3dse2l3oth93iltwgj95ik7hko3bdtfw8n8o1jnevabxgmxk4et82kaslfosg2vgbbs98zph1is9u9iieaouz2f7z9hyh9ka31i2zx0ids7tyq3qqaqm0z8ydkx9osunyj6c9m3vi6tuny9pzvtdydaq3k7wnae7vm9ndcnpn5ahtn4ysbwrww1d0nept2nmaoehrumw2rwv0oh3d3slsxetciknj8tt6rxtnaz4k4yj9ib4lnczz5blbyffys639lne4oi5plf3rrxw2iitojfwgn8pe8h9sk6nj7xjqbbndxvnevrddqj8vd6kmb1eiff6ue4p70jhi2p84hx1ya4z5tmwkwcg0joaoal089o7ozu9rcu1ke7848cbcj8i9obve9tcih3gthw6pshdva4hey6cbfbn19ctrm102sq2ha2ezv8449qf4vwagp3uwhdye89tloegujvz0co74lfklwr5fqodz9v8wjar0asuzfmhryt9mgtppdl85desssxdal0q4im3np08tzvosl4jm4g2o57l2e1tzwrrmkw3xoh6mpjpa32dzv2y4s5cqyi9w52u3iz221sm8uxh9ayuycvj3gyfdqtyswkcy41283mg8ztquakddm9is4yvecy98tql07eotpfhvwmzpbu1xap4hvjusce28hbl7xhzcb5z7i4fphc5bf7247 == 
\3\u\q\y\8\6\6\m\5\0\k\f\j\2\c\r\t\x\7\q\h\9\l\s\u\8\w\k\5\c\e\1\c\9\w\b\8\p\2\r\s\f\k\v\m\a\c\2\l\q\2\1\2\a\6\6\x\r\f\9\y\q\r\n\r\h\l\9\n\3\1\1\l\c\e\p\s\1\n\w\e\h\x\s\a\2\4\2\n\y\k\3\6\0\9\j\k\z\w\j\7\e\5\d\f\9\r\b\w\t\g\6\d\n\j\e\b\8\m\s\8\8\9\7\b\2\2\6\v\g\r\1\c\u\t\z\l\z\r\v\6\s\c\8\1\b\k\t\t\6\x\8\9\x\2\j\2\m\m\4\o\c\f\y\g\v\9\a\r\n\7\d\z\7\7\7\2\w\k\o\q\p\6\8\r\m\o\e\1\h\6\g\l\q\u\k\l\f\u\q\t\f\f\g\h\g\2\p\1\5\b\i\c\1\h\o\4\s\7\8\q\6\g\c\2\n\u\x\9\f\9\h\7\s\g\k\l\a\q\d\r\z\x\q\a\a\d\r\i\4\f\f\y\s\9\y\2\0\1\2\r\n\9\3\s\w\q\y\y\7\g\3\7\1\1\k\l\6\3\u\6\3\b\d\u\n\h\u\y\9\f\i\l\i\k\4\x\l\0\3\a\1\r\o\x\r\v\a\0\z\4\9\v\g\2\r\l\6\8\p\o\e\3\d\s\e\2\l\3\o\t\h\9\3\i\l\t\w\g\j\9\5\i\k\7\h\k\o\3\b\d\t\f\w\8\n\8\o\1\j\n\e\v\a\b\x\g\m\x\k\4\e\t\8\2\k\a\s\l\f\o\s\g\2\v\g\b\b\s\9\8\z\p\h\1\i\s\9\u\9\i\i\e\a\o\u\z\2\f\7\z\9\h\y\h\9\k\a\3\1\i\2\z\x\0\i\d\s\7\t\y\q\3\q\q\a\q\m\0\z\8\y\d\k\x\9\o\s\u\n\y\j\6\c\9\m\3\v\i\6\t\u\n\y\9\p\z\v\t\d\y\d\a\q\3\k\7\w\n\a\e\7\v\m\9\n\d\c\n\p\n\5\a\h\t\n\4\y\s\b\w\r\w\w\1\d\0\n\e\p\t\2\n\m\a\o\e\h\r\u\m\w\2\r\w\v\0\o\h\3\d\3\s\l\s\x\e\t\c\i\k\n\j\8\t\t\6\r\x\t\n\a\z\4\k\4\y\j\9\i\b\4\l\n\c\z\z\5\b\l\b\y\f\f\y\s\6\3\9\l\n\e\4\o\i\5\p\l\f\3\r\r\x\w\2\i\i\t\o\j\f\w\g\n\8\p\e\8\h\9\s\k\6\n\j\7\x\j\q\b\b\n\d\x\v\n\e\v\r\d\d\q\j\8\v\d\6\k\m\b\1\e\i\f\f\6\u\e\4\p\7\0\j\h\i\2\p\8\4\h\x\1\y\a\4\z\5\t\m\w\k\w\c\g\0\j\o\a\o\a\l\0\8\9\o\7\o\z\u\9\r\c\u\1\k\e\7\8\4\8\c\b\c\j\8\i\9\o\b\v\e\9\t\c\i\h\3\g\t\h\w\6\p\s\h\d\v\a\4\h\e\y\6\c\b\f\b\n\1\9\c\t\r\m\1\0\2\s\q\2\h\a\2\e\z\v\8\4\4\9\q\f\4\v\w\a\g\p\3\u\w\h\d\y\e\8\9\t\l\o\e\g\u\j\v\z\0\c\o\7\4\l\f\k\l\w\r\5\f\q\o\d\z\9\v\8\w\j\a\r\0\a\s\u\z\f\m\h\r\y\t\9\m\g\t\p\p\d\l\8\5\d\e\s\s\s\x\d\a\l\0\q\4\i\m\3\n\p\0\8\t\z\v\o\s\l\4\j\m\4\g\2\o\5\7\l\2\e\1\t\z\w\r\r\m\k\w\3\x\o\h\6\m\p\j\p\a\3\2\d\z\v\2\y\4\s\5\c\q\y\i\9\w\5\2\u\3\i\z\2\2\1\s\m\8\u\x\h\9\a\y\u\y\c\v\j\3\g\y\f\d\q\t\y\s\w\k\c\y\4\1\2\8\3\m\g\8\z\t\q\u\a\k\d\d\m\9\i\s\4\y\v\e\c\y\9\8\t\q\l\0\7\e\o\t\p\f\h\v\w\m\z\p\b\u\1\x\a\p\4\h\v\j\u\s\c\e\2\8\h\b\l\7\x\h\z\c\b\5\z\7\i\4\f\p\h\c\5\b\f\7\2\4\7 ]] 00:07:37.596 20:48:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:37.855 20:48:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:37.855 20:48:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:37.855 20:48:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:37.855 20:48:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:37.855 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:37.855 [2024-08-11 20:48:48.562614] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:37.855 [2024-08-11 20:48:48.562720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71916 ] 00:07:37.855 { 00:07:37.855 "subsystems": [ 00:07:37.855 { 00:07:37.855 "subsystem": "bdev", 00:07:37.855 "config": [ 00:07:37.855 { 00:07:37.855 "params": { 00:07:37.855 "block_size": 512, 00:07:37.855 "num_blocks": 1048576, 00:07:37.855 "name": "malloc0" 00:07:37.855 }, 00:07:37.855 "method": "bdev_malloc_create" 00:07:37.855 }, 00:07:37.855 { 00:07:37.855 "params": { 00:07:37.855 "filename": "/dev/zram1", 00:07:37.855 "name": "uring0" 00:07:37.855 }, 00:07:37.855 "method": "bdev_uring_create" 00:07:37.855 }, 00:07:37.855 { 00:07:37.855 "method": "bdev_wait_for_examine" 00:07:37.855 } 00:07:37.855 ] 00:07:37.855 } 00:07:37.855 ] 00:07:37.855 } 00:07:38.113 [2024-08-11 20:48:48.697902] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.113 [2024-08-11 20:48:48.756471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.113 [2024-08-11 20:48:48.810393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.615  Copying: 179/512 [MB] (179 MBps) Copying: 358/512 [MB] (178 MBps) Copying: 512/512 [MB] (average 180 MBps) 00:07:41.615 00:07:41.615 20:48:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:41.615 20:48:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:41.615 20:48:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:41.615 20:48:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:41.615 20:48:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:41.615 20:48:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:41.615 20:48:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:41.615 20:48:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.615 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:41.615 [2024-08-11 20:48:52.278761] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
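The copy from uring0 back into malloc0 above is configured entirely by the JSON bdev definition that gen_conf emits and spdk_dd reads from /dev/fd/62. A minimal sketch of the same invocation with that configuration written to an ordinary file instead (the /tmp path is an assumption for illustration; the bdev parameters are the ones dumped above):

cat > /tmp/uring_bdevs.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {"method": "bdev_malloc_create",
         "params": {"name": "malloc0", "block_size": 512, "num_blocks": 1048576}},
        {"method": "bdev_uring_create",
         "params": {"name": "uring0", "filename": "/dev/zram1"}},
        {"method": "bdev_wait_for_examine"}
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /tmp/uring_bdevs.json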
00:07:41.615 [2024-08-11 20:48:52.278901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71972 ] 00:07:41.615 { 00:07:41.615 "subsystems": [ 00:07:41.615 { 00:07:41.615 "subsystem": "bdev", 00:07:41.615 "config": [ 00:07:41.615 { 00:07:41.615 "params": { 00:07:41.615 "block_size": 512, 00:07:41.615 "num_blocks": 1048576, 00:07:41.615 "name": "malloc0" 00:07:41.615 }, 00:07:41.615 "method": "bdev_malloc_create" 00:07:41.615 }, 00:07:41.615 { 00:07:41.615 "params": { 00:07:41.615 "filename": "/dev/zram1", 00:07:41.615 "name": "uring0" 00:07:41.615 }, 00:07:41.615 "method": "bdev_uring_create" 00:07:41.615 }, 00:07:41.615 { 00:07:41.615 "params": { 00:07:41.615 "name": "uring0" 00:07:41.615 }, 00:07:41.615 "method": "bdev_uring_delete" 00:07:41.615 }, 00:07:41.615 { 00:07:41.615 "method": "bdev_wait_for_examine" 00:07:41.615 } 00:07:41.615 ] 00:07:41.615 } 00:07:41.615 ] 00:07:41.615 } 00:07:41.874 [2024-08-11 20:48:52.415653] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.874 [2024-08-11 20:48:52.466517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.874 [2024-08-11 20:48:52.518909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.392  Copying: 0/0 [B] (average 0 Bps) 00:07:42.392 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # local es=0 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.392 20:48:53 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.650 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:42.650 [2024-08-11 20:48:53.192075] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:42.650 [2024-08-11 20:48:53.192180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72007 ] 00:07:42.650 { 00:07:42.650 "subsystems": [ 00:07:42.650 { 00:07:42.650 "subsystem": "bdev", 00:07:42.650 "config": [ 00:07:42.650 { 00:07:42.650 "params": { 00:07:42.650 "block_size": 512, 00:07:42.650 "num_blocks": 1048576, 00:07:42.650 "name": "malloc0" 00:07:42.650 }, 00:07:42.650 "method": "bdev_malloc_create" 00:07:42.650 }, 00:07:42.650 { 00:07:42.650 "params": { 00:07:42.650 "filename": "/dev/zram1", 00:07:42.650 "name": "uring0" 00:07:42.650 }, 00:07:42.650 "method": "bdev_uring_create" 00:07:42.650 }, 00:07:42.650 { 00:07:42.650 "params": { 00:07:42.650 "name": "uring0" 00:07:42.650 }, 00:07:42.650 "method": "bdev_uring_delete" 00:07:42.650 }, 00:07:42.650 { 00:07:42.650 "method": "bdev_wait_for_examine" 00:07:42.650 } 00:07:42.650 ] 00:07:42.650 } 00:07:42.650 ] 00:07:42.650 } 00:07:42.650 [2024-08-11 20:48:53.320392] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.650 [2024-08-11 20:48:53.376272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.908 [2024-08-11 20:48:53.430468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.908 [2024-08-11 20:48:53.623691] bdev.c:8234:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:42.908 [2024-08-11 20:48:53.623746] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:42.908 [2024-08-11 20:48:53.623771] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:42.908 [2024-08-11 20:48:53.623780] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.168 [2024-08-11 20:48:53.923114] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@649 -- # es=237 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@658 -- # es=109 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # case "$es" in 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@666 -- # es=1 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:43.426 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:43.685 00:07:43.685 real 0m14.115s 00:07:43.685 user 0m9.395s 00:07:43.685 sys 0m11.908s 00:07:43.685 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.685 ************************************ 00:07:43.685 END TEST dd_uring_copy 00:07:43.685 ************************************ 00:07:43.685 20:48:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:43.685 00:07:43.685 real 0m14.257s 00:07:43.685 user 0m9.455s 00:07:43.685 sys 0m11.989s 00:07:43.685 20:48:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.685 20:48:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:43.685 ************************************ 00:07:43.685 END TEST spdk_dd_uring 00:07:43.685 ************************************ 00:07:43.685 20:48:54 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:43.685 20:48:54 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:43.685 20:48:54 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.685 20:48:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:43.685 ************************************ 00:07:43.685 START TEST spdk_dd_sparse 00:07:43.685 ************************************ 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:43.685 * Looking for test storage... 00:07:43.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:43.685 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:43.686 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:43.686 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:43.686 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:43.686 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:43.686 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:43.686 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:43.686 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:43.686 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:43.686 1+0 records in 00:07:43.686 1+0 records out 00:07:43.686 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00484553 s, 866 MB/s 00:07:43.686 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:43.686 1+0 records in 00:07:43.686 1+0 records out 00:07:43.686 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0041238 s, 1.0 GB/s 00:07:43.686 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:43.945 1+0 records in 00:07:43.945 1+0 records out 00:07:43.945 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00906938 s, 462 MB/s 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:43.945 ************************************ 00:07:43.945 START TEST dd_sparse_file_to_file 00:07:43.945 ************************************ 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1121 -- # 
file_to_file 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:43.945 20:48:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:43.945 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:43.945 [2024-08-11 20:48:54.532078] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:43.945 [2024-08-11 20:48:54.532172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72093 ] 00:07:43.945 { 00:07:43.945 "subsystems": [ 00:07:43.945 { 00:07:43.945 "subsystem": "bdev", 00:07:43.945 "config": [ 00:07:43.945 { 00:07:43.945 "params": { 00:07:43.945 "block_size": 4096, 00:07:43.945 "filename": "dd_sparse_aio_disk", 00:07:43.945 "name": "dd_aio" 00:07:43.945 }, 00:07:43.945 "method": "bdev_aio_create" 00:07:43.945 }, 00:07:43.945 { 00:07:43.945 "params": { 00:07:43.945 "lvs_name": "dd_lvstore", 00:07:43.945 "bdev_name": "dd_aio" 00:07:43.945 }, 00:07:43.945 "method": "bdev_lvol_create_lvstore" 00:07:43.945 }, 00:07:43.945 { 00:07:43.945 "method": "bdev_wait_for_examine" 00:07:43.945 } 00:07:43.945 ] 00:07:43.945 } 00:07:43.945 ] 00:07:43.945 } 00:07:43.945 [2024-08-11 20:48:54.669418] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.205 [2024-08-11 20:48:54.722567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.205 [2024-08-11 20:48:54.774428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.463  Copying: 12/36 [MB] (average 923 MBps) 00:07:44.463 00:07:44.463 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:44.463 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:44.463 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:44.463 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:44.463 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:44.464 00:07:44.464 real 0m0.618s 00:07:44.464 user 0m0.362s 00:07:44.464 sys 0m0.353s 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.464 ************************************ 00:07:44.464 END TEST dd_sparse_file_to_file 00:07:44.464 ************************************ 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:44.464 ************************************ 00:07:44.464 START TEST dd_sparse_file_to_bdev 00:07:44.464 ************************************ 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1121 -- # file_to_bdev 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:44.464 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:44.464 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:44.464 [2024-08-11 20:48:55.202761] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
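The size checks above compare a file's apparent length against the blocks actually allocated to it, which is how the sparse tests confirm that holes survive the --sparse copy. A minimal standalone sketch of that check, reusing the layout the prepare step wrote (same relative file names as the test):

truncate --size 104857600 dd_sparse_aio_disk         # sparse 100 MiB backing file for the dd_aio bdev
dd if=/dev/zero of=file_zero1 bs=4M count=1          # 4 MiB of data at offset 0,
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # at 16 MiB,
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # and at 32 MiB, leaving holes between them
stat --printf=%s file_zero1                          # apparent length: 37748736 bytes (36 MiB)
stat --printf=%b file_zero1                          # allocated 512-byte blocks: 24576 (only 12 MiB really written)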
00:07:44.464 [2024-08-11 20:48:55.202853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72136 ] 00:07:44.464 { 00:07:44.464 "subsystems": [ 00:07:44.464 { 00:07:44.464 "subsystem": "bdev", 00:07:44.464 "config": [ 00:07:44.464 { 00:07:44.464 "params": { 00:07:44.464 "block_size": 4096, 00:07:44.464 "filename": "dd_sparse_aio_disk", 00:07:44.464 "name": "dd_aio" 00:07:44.464 }, 00:07:44.464 "method": "bdev_aio_create" 00:07:44.464 }, 00:07:44.464 { 00:07:44.464 "params": { 00:07:44.464 "lvs_name": "dd_lvstore", 00:07:44.464 "lvol_name": "dd_lvol", 00:07:44.464 "size_in_mib": 36, 00:07:44.464 "thin_provision": true 00:07:44.464 }, 00:07:44.464 "method": "bdev_lvol_create" 00:07:44.464 }, 00:07:44.464 { 00:07:44.464 "method": "bdev_wait_for_examine" 00:07:44.464 } 00:07:44.464 ] 00:07:44.464 } 00:07:44.464 ] 00:07:44.464 } 00:07:44.723 [2024-08-11 20:48:55.338740] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.723 [2024-08-11 20:48:55.395352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.723 [2024-08-11 20:48:55.449608] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.982  Copying: 12/36 [MB] (average 480 MBps) 00:07:44.982 00:07:44.982 00:07:44.982 real 0m0.606s 00:07:44.982 user 0m0.373s 00:07:44.982 sys 0m0.332s 00:07:44.982 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:44.982 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:44.982 ************************************ 00:07:44.982 END TEST dd_sparse_file_to_bdev 00:07:44.982 ************************************ 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:45.241 ************************************ 00:07:45.241 START TEST dd_sparse_bdev_to_file 00:07:45.241 ************************************ 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1121 -- # bdev_to_file 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # 
xtrace_disable 00:07:45.241 20:48:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:45.241 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:45.241 [2024-08-11 20:48:55.861919] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:45.241 [2024-08-11 20:48:55.862006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72168 ] 00:07:45.241 { 00:07:45.241 "subsystems": [ 00:07:45.241 { 00:07:45.241 "subsystem": "bdev", 00:07:45.241 "config": [ 00:07:45.241 { 00:07:45.241 "params": { 00:07:45.241 "block_size": 4096, 00:07:45.241 "filename": "dd_sparse_aio_disk", 00:07:45.241 "name": "dd_aio" 00:07:45.241 }, 00:07:45.241 "method": "bdev_aio_create" 00:07:45.241 }, 00:07:45.241 { 00:07:45.241 "method": "bdev_wait_for_examine" 00:07:45.241 } 00:07:45.241 ] 00:07:45.241 } 00:07:45.241 ] 00:07:45.241 } 00:07:45.241 [2024-08-11 20:48:55.998279] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.500 [2024-08-11 20:48:56.061270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.500 [2024-08-11 20:48:56.114074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.759  Copying: 12/36 [MB] (average 923 MBps) 00:07:45.759 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:45.759 00:07:45.759 real 0m0.626s 00:07:45.759 user 0m0.385s 00:07:45.759 sys 0m0.342s 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.759 ************************************ 00:07:45.759 END TEST dd_sparse_bdev_to_file 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:45.759 ************************************ 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- 
# rm file_zero3 00:07:45.759 00:07:45.759 real 0m2.164s 00:07:45.759 user 0m1.214s 00:07:45.759 sys 0m1.228s 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.759 20:48:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:45.759 ************************************ 00:07:45.759 END TEST spdk_dd_sparse 00:07:45.759 ************************************ 00:07:46.029 20:48:56 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:46.030 20:48:56 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:46.030 20:48:56 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.030 20:48:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:46.030 ************************************ 00:07:46.030 START TEST spdk_dd_negative 00:07:46.030 ************************************ 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:46.030 * Looking for test storage... 00:07:46.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 
00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.030 ************************************ 00:07:46.030 START TEST dd_invalid_arguments 00:07:46.030 ************************************ 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1121 -- # invalid_arguments 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # local es=0 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.030 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.030 20:48:56 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:46.030 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:46.030 00:07:46.030 CPU options: 00:07:46.030 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:46.030 (like [0,1,10]) 00:07:46.030 --lcores lcore to CPU mapping list. The list is in the format: 00:07:46.030 [<,lcores[@CPUs]>...] 00:07:46.030 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:46.030 Within the group, '-' is used for range separator, 00:07:46.030 ',' is used for single number separator. 00:07:46.030 '( )' can be omitted for single element group, 00:07:46.030 '@' can be omitted if cpus and lcores have the same value 00:07:46.030 --disable-cpumask-locks Disable CPU core lock files. 00:07:46.030 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:46.030 pollers in the app support interrupt mode) 00:07:46.030 -p, --main-core main (primary) core for DPDK 00:07:46.030 00:07:46.030 Configuration options: 00:07:46.030 -c, --config, --json JSON config file 00:07:46.030 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:46.030 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:46.030 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:46.030 --rpcs-allowed comma-separated list of permitted RPCS 00:07:46.030 --json-ignore-init-errors don't exit on invalid config entry 00:07:46.030 00:07:46.030 Memory options: 00:07:46.030 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:46.030 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:46.030 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:46.030 -R, --huge-unlink unlink huge files after initialization 00:07:46.030 -n, --mem-channels number of memory channels used for DPDK 00:07:46.030 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:46.030 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:46.030 --no-huge run without using hugepages 00:07:46.030 -i, --shm-id shared memory ID (optional) 00:07:46.030 -g, --single-file-segments force creating just one hugetlbfs file 00:07:46.030 00:07:46.030 PCI options: 00:07:46.030 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:46.030 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:46.030 -u, --no-pci disable PCI access 00:07:46.030 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:46.030 00:07:46.030 Log options: 00:07:46.030 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:46.030 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:46.030 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:46.030 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:46.030 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:46.030 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:46.030 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:46.030 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:46.030 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:46.030 virtio, virtio_blk, virtio_dev, virtio_pci, 
virtio_user, 00:07:46.030 virtio_vfio_user, vmd) 00:07:46.030 --silence-noticelog disable notice level logging to stderr 00:07:46.030 00:07:46.030 Trace options: 00:07:46.030 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:46.030 setting 0 to disable trace (default 32768) 00:07:46.030 Tracepoints vary in size and can use more than one trace entry. 00:07:46.030 -e, --tpoint-group [:] 00:07:46.031 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:46.031 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:46.031 [2024-08-11 20:48:56.720172] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:46.031 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:46.031 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:46.031 a tracepoint group. First tpoint inside a group can be enabled by 00:07:46.031 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:46.031 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:46.031 in /include/spdk_internal/trace_defs.h 00:07:46.031 00:07:46.031 Other options: 00:07:46.031 -h, --help show this usage 00:07:46.031 -v, --version print SPDK version 00:07:46.031 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:46.031 --env-context Opaque context for use of the env implementation 00:07:46.031 00:07:46.031 Application specific: 00:07:46.031 [--------- DD Options ---------] 00:07:46.031 --if Input file. Must specify either --if or --ib. 00:07:46.031 --ib Input bdev. Must specifier either --if or --ib 00:07:46.031 --of Output file. Must specify either --of or --ob. 00:07:46.031 --ob Output bdev. Must specify either --of or --ob. 00:07:46.031 --iflag Input file flags. 00:07:46.031 --oflag Output file flags. 00:07:46.031 --bs I/O unit size (default: 4096) 00:07:46.031 --qd Queue depth (default: 2) 00:07:46.031 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:46.031 --skip Skip this many I/O units at start of input. (default: 0) 00:07:46.031 --seek Skip this many I/O units at start of output. (default: 0) 00:07:46.031 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:46.031 --sparse Enable hole skipping in input target 00:07:46.031 Available iflag and oflag values: 00:07:46.031 append - append mode 00:07:46.031 direct - use direct I/O for data 00:07:46.031 directory - fail unless a directory 00:07:46.031 dsync - use synchronized I/O for data 00:07:46.031 noatime - do not update access time 00:07:46.031 noctty - do not assign controlling terminal from file 00:07:46.031 nofollow - do not follow symlinks 00:07:46.031 nonblock - use non-blocking I/O 00:07:46.031 sync - use synchronized I/O for data and metadata 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@649 -- # es=2 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:46.031 00:07:46.031 real 0m0.074s 00:07:46.031 user 0m0.041s 00:07:46.031 sys 0m0.030s 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:46.031 ************************************ 00:07:46.031 END TEST dd_invalid_arguments 00:07:46.031 ************************************ 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.031 ************************************ 00:07:46.031 START TEST dd_double_input 00:07:46.031 ************************************ 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1121 -- # double_input 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # local es=0 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.031 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:46.303 [2024-08-11 20:48:56.846986] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@649 -- # es=22 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:46.303 00:07:46.303 real 0m0.072s 00:07:46.303 user 0m0.042s 00:07:46.303 sys 0m0.027s 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 ************************************ 00:07:46.303 END TEST dd_double_input 00:07:46.303 ************************************ 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 ************************************ 00:07:46.303 START TEST dd_double_output 00:07:46.303 ************************************ 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1121 -- # double_output 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # local es=0 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.303 20:48:56 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:46.303 [2024-08-11 20:48:56.973896] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@649 -- # es=22 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:46.303 00:07:46.303 real 0m0.073s 00:07:46.303 user 0m0.039s 00:07:46.303 sys 0m0.030s 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.303 20:48:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 ************************************ 00:07:46.303 END TEST dd_double_output 00:07:46.303 ************************************ 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.303 ************************************ 00:07:46.303 START TEST dd_no_input 00:07:46.303 ************************************ 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1121 -- # no_input 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # local es=0 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.303 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:46.562 [2024-08-11 20:48:57.098024] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@649 -- # es=22 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:46.563 00:07:46.563 real 0m0.069s 00:07:46.563 user 0m0.044s 00:07:46.563 sys 0m0.022s 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:46.563 ************************************ 00:07:46.563 END TEST dd_no_input 00:07:46.563 ************************************ 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.563 ************************************ 00:07:46.563 START TEST dd_no_output 00:07:46.563 ************************************ 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1121 -- # no_output 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # local es=0 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.563 20:48:57 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.563 [2024-08-11 20:48:57.209585] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@649 -- # es=22 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:46.563 00:07:46.563 real 0m0.057s 00:07:46.563 user 0m0.032s 00:07:46.563 sys 0m0.023s 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:46.563 ************************************ 00:07:46.563 END TEST dd_no_output 00:07:46.563 ************************************ 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.563 ************************************ 00:07:46.563 START TEST dd_wrong_blocksize 00:07:46.563 ************************************ 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1121 -- # wrong_blocksize 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # local es=0 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.563 20:48:57 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.563 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:46.563 [2024-08-11 20:48:57.330152] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@649 -- # es=22 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:46.822 00:07:46.822 real 0m0.074s 00:07:46.822 user 0m0.040s 00:07:46.822 sys 0m0.030s 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:46.822 ************************************ 00:07:46.822 END TEST dd_wrong_blocksize 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:46.822 ************************************ 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.822 ************************************ 00:07:46.822 START TEST dd_smaller_blocksize 00:07:46.822 ************************************ 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1121 -- # smaller_blocksize 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # local es=0 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.822 20:48:57 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.822 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:46.822 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:46.822 [2024-08-11 20:48:57.456585] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:46.822 [2024-08-11 20:48:57.456699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72392 ] 00:07:46.822 [2024-08-11 20:48:57.596950] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.081 [2024-08-11 20:48:57.669049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.081 [2024-08-11 20:48:57.726850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.081 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:47.081 [2024-08-11 20:48:57.758188] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:47.081 [2024-08-11 20:48:57.758223] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.339 [2024-08-11 20:48:57.871131] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:47.339 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@649 -- # es=244 00:07:47.339 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:47.339 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@658 -- # es=116 00:07:47.339 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # case "$es" in 00:07:47.339 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@666 -- # es=1 00:07:47.339 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:47.339 00:07:47.339 real 0m0.561s 00:07:47.339 user 0m0.297s 00:07:47.339 sys 0m0.157s 00:07:47.339 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.339 20:48:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:47.339 ************************************ 00:07:47.339 END TEST 
dd_smaller_blocksize 00:07:47.339 ************************************ 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:47.339 ************************************ 00:07:47.339 START TEST dd_invalid_count 00:07:47.339 ************************************ 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1121 -- # invalid_count 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # local es=0 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:47.339 [2024-08-11 20:48:58.075039] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@649 -- # es=22 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:47.339 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:47.339 00:07:47.339 real 0m0.073s 00:07:47.340 user 0m0.040s 00:07:47.340 sys 0m0.029s 00:07:47.340 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.340 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:47.340 ************************************ 00:07:47.340 END TEST dd_invalid_count 00:07:47.340 ************************************ 00:07:47.598 20:48:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:47.598 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.598 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.598 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:47.598 ************************************ 00:07:47.598 START TEST dd_invalid_oflag 00:07:47.598 ************************************ 00:07:47.598 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1121 -- # invalid_oflag 00:07:47.598 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:47.598 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # local es=0 00:07:47.598 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:47.598 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.598 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:47.599 [2024-08-11 20:48:58.198806] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@649 -- # es=22 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:47.599 00:07:47.599 real 0m0.070s 00:07:47.599 user 0m0.036s 00:07:47.599 sys 0m0.031s 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1122 -- # xtrace_disable 
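Every negative case in this suite follows the pattern traced above: the spdk_dd invocation is wrapped in NOT, its exit status is captured into es, large statuses are folded back down, and the test only passes when the command failed. The sketch below is a simplified, hypothetical stand-in for that wrapper (not the real common/autotest_common.sh helper), using the spdk_dd path that appears throughout this log.

# Minimal sketch of a NOT-style negative-test helper (illustrative only).
NOT() {
    local es=0
    "$@" || es=$?                          # run the wrapped command, keep its status
    (( es > 128 )) && es=$(( es - 128 ))   # fold signal-style statuses back down
    (( es != 0 ))                          # succeed only if the command failed
}

# Example: spdk_dd must reject the unknown option '--ii='.
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=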
00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:47.599 ************************************ 00:07:47.599 END TEST dd_invalid_oflag 00:07:47.599 ************************************ 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:47.599 ************************************ 00:07:47.599 START TEST dd_invalid_iflag 00:07:47.599 ************************************ 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1121 -- # invalid_iflag 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # local es=0 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:47.599 [2024-08-11 20:48:58.320296] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@649 -- # es=22 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:47.599 00:07:47.599 real 0m0.067s 00:07:47.599 user 0m0.042s 00:07:47.599 sys 0m0.024s 00:07:47.599 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:47.599 20:48:58 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:47.599 ************************************ 00:07:47.599 END TEST dd_invalid_iflag 00:07:47.599 ************************************ 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:47.858 ************************************ 00:07:47.858 START TEST dd_unknown_flag 00:07:47.858 ************************************ 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1121 -- # unknown_flag 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # local es=0 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.858 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:47.858 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:47.858 [2024-08-11 20:48:58.442236] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
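In the unknown_flag case the only difference from a passing run is the flag value: -1 is not one of the names listed in the usage text earlier (append, direct, directory, dsync, noatime, noctty, nofollow, nonblock, sync), so the run below stops with "Unknown file flag: -1". A passing counterpart, reusing the same dump files and a flag name from that list, would look roughly like this (illustrative only, not part of the test):

# Same input/output files as the negative test, but with a valid output flag.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --bs=4096 --oflag=dsync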
00:07:47.858 [2024-08-11 20:48:58.442345] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72483 ] 00:07:47.858 [2024-08-11 20:48:58.578980] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.117 [2024-08-11 20:48:58.639802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.117 [2024-08-11 20:48:58.693453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.117 [2024-08-11 20:48:58.722855] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:48.117 [2024-08-11 20:48:58.722934] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.117 [2024-08-11 20:48:58.723033] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:48.117 [2024-08-11 20:48:58.723046] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.117 [2024-08-11 20:48:58.723334] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:48.117 [2024-08-11 20:48:58.723365] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.117 [2024-08-11 20:48:58.723415] app.c:1041:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:48.117 [2024-08-11 20:48:58.723426] app.c:1041:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:48.117 [2024-08-11 20:48:58.829164] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:48.375 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@649 -- # es=234 00:07:48.375 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:48.375 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@658 -- # es=106 00:07:48.375 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # case "$es" in 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@666 -- # es=1 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:48.376 00:07:48.376 real 0m0.529s 00:07:48.376 user 0m0.275s 00:07:48.376 sys 0m0.162s 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:48.376 ************************************ 00:07:48.376 END TEST dd_unknown_flag 00:07:48.376 ************************************ 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:48.376 ************************************ 00:07:48.376 START TEST dd_invalid_json 00:07:48.376 ************************************ 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1121 -- # invalid_json 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # local es=0 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.376 20:48:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:48.376 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:48.376 [2024-08-11 20:48:59.025055] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
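The invalid_json case drives spdk_dd with --json /dev/fd/62 while feeding that descriptor an empty document (the bare ':' traced above), so the run below stops at "JSON data cannot be empty". For comparison, a minimal non-empty configuration of the same shape that the seek/skip cases generate later in this log can be supplied the same way; the output path and bdev name here are illustrative only.

# Illustrative only: a --json config with a single 512-block malloc bdev.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --of=/tmp/dd.out --bs=512 \
    --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "name": "malloc0", "num_blocks": 512, "block_size": 512 },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)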
00:07:48.376 [2024-08-11 20:48:59.025162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72507 ] 00:07:48.635 [2024-08-11 20:48:59.162357] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.635 [2024-08-11 20:48:59.225691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.635 [2024-08-11 20:48:59.225802] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:48.635 [2024-08-11 20:48:59.225816] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:48.635 [2024-08-11 20:48:59.225825] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.635 [2024-08-11 20:48:59.225861] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@649 -- # es=234 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@658 -- # es=106 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # case "$es" in 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@666 -- # es=1 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:48.635 00:07:48.635 real 0m0.336s 00:07:48.635 user 0m0.156s 00:07:48.635 sys 0m0.079s 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:48.635 ************************************ 00:07:48.635 END TEST dd_invalid_json 00:07:48.635 ************************************ 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:48.635 ************************************ 00:07:48.635 START TEST dd_invalid_seek 00:07:48.635 ************************************ 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1121 -- # invalid_seek 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:48.635 
20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # local es=0 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.635 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:48.894 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:48.894 [2024-08-11 20:48:59.421939] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
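The seek case that follows builds two 512-block, 512-byte malloc bdevs (the generated JSON is printed below) and then asks spdk_dd to start writing at block 513 of malloc1, which is past the end of the 512-block output bdev, so the copy is rejected with "--seek value too big (513)". With the same configuration saved to a file (bdev_malloc.json is a hypothetical name), an in-range seek goes through:

# Illustrative only: the two-malloc-bdev config printed below, saved to bdev_malloc.json.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 \
    --bs=512 --count=1 --seek=511 --json bdev_malloc.json   # writes the last block of malloc1
# The negative case traced above passes --seek=513 instead and is expected to fail.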
00:07:48.894 [2024-08-11 20:48:59.422049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72537 ] 00:07:48.894 { 00:07:48.894 "subsystems": [ 00:07:48.894 { 00:07:48.894 "subsystem": "bdev", 00:07:48.894 "config": [ 00:07:48.894 { 00:07:48.894 "params": { 00:07:48.894 "block_size": 512, 00:07:48.894 "num_blocks": 512, 00:07:48.894 "name": "malloc0" 00:07:48.894 }, 00:07:48.894 "method": "bdev_malloc_create" 00:07:48.894 }, 00:07:48.894 { 00:07:48.894 "params": { 00:07:48.894 "block_size": 512, 00:07:48.894 "num_blocks": 512, 00:07:48.894 "name": "malloc1" 00:07:48.894 }, 00:07:48.894 "method": "bdev_malloc_create" 00:07:48.894 }, 00:07:48.894 { 00:07:48.894 "method": "bdev_wait_for_examine" 00:07:48.894 } 00:07:48.894 ] 00:07:48.894 } 00:07:48.894 ] 00:07:48.894 } 00:07:48.894 [2024-08-11 20:48:59.559177] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.894 [2024-08-11 20:48:59.614446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.894 [2024-08-11 20:48:59.668918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.152 [2024-08-11 20:48:59.722760] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:49.152 [2024-08-11 20:48:59.722811] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.152 [2024-08-11 20:48:59.829992] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:49.152 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@649 -- # es=228 00:07:49.152 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:49.152 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@658 -- # es=100 00:07:49.152 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@659 -- # case "$es" in 00:07:49.152 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@666 -- # es=1 00:07:49.152 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:49.152 00:07:49.152 real 0m0.552s 00:07:49.153 user 0m0.347s 00:07:49.153 sys 0m0.164s 00:07:49.153 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.153 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:49.153 ************************************ 00:07:49.153 END TEST dd_invalid_seek 00:07:49.153 ************************************ 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:49.411 ************************************ 00:07:49.411 START TEST dd_invalid_skip 00:07:49.411 ************************************ 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1121 -- # invalid_skip 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # local es=0 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:49.411 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:49.412 20:48:59 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:49.412 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:49.412 [2024-08-11 20:49:00.018921] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:49.412 [2024-08-11 20:49:00.019011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72570 ] 00:07:49.412 { 00:07:49.412 "subsystems": [ 00:07:49.412 { 00:07:49.412 "subsystem": "bdev", 00:07:49.412 "config": [ 00:07:49.412 { 00:07:49.412 "params": { 00:07:49.412 "block_size": 512, 00:07:49.412 "num_blocks": 512, 00:07:49.412 "name": "malloc0" 00:07:49.412 }, 00:07:49.412 "method": "bdev_malloc_create" 00:07:49.412 }, 00:07:49.412 { 00:07:49.412 "params": { 00:07:49.412 "block_size": 512, 00:07:49.412 "num_blocks": 512, 00:07:49.412 "name": "malloc1" 00:07:49.412 }, 00:07:49.412 "method": "bdev_malloc_create" 00:07:49.412 }, 00:07:49.412 { 00:07:49.412 "method": "bdev_wait_for_examine" 00:07:49.412 } 00:07:49.412 ] 00:07:49.412 } 00:07:49.412 ] 00:07:49.412 } 00:07:49.412 [2024-08-11 20:49:00.154904] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.670 [2024-08-11 20:49:00.217251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.670 [2024-08-11 20:49:00.271280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.670 [2024-08-11 20:49:00.325412] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:49.670 [2024-08-11 20:49:00.325466] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.670 [2024-08-11 20:49:00.433853] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@649 -- # es=228 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@658 -- # es=100 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@659 -- # case "$es" in 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@666 -- # es=1 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:49.929 00:07:49.929 real 0m0.565s 00:07:49.929 user 0m0.362s 00:07:49.929 sys 0m0.163s 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.929 ************************************ 00:07:49.929 END TEST dd_invalid_skip 00:07:49.929 ************************************ 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:49.929 ************************************ 00:07:49.929 START TEST dd_invalid_input_count 00:07:49.929 ************************************ 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1121 -- # invalid_input_count 00:07:49.929 20:49:00 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # local es=0 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:49.929 20:49:00 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:49.929 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:49.929 [2024-08-11 20:49:00.641402] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:49.929 [2024-08-11 20:49:00.641499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72609 ] 00:07:49.929 { 00:07:49.929 "subsystems": [ 00:07:49.929 { 00:07:49.929 "subsystem": "bdev", 00:07:49.929 "config": [ 00:07:49.929 { 00:07:49.929 "params": { 00:07:49.929 "block_size": 512, 00:07:49.929 "num_blocks": 512, 00:07:49.929 "name": "malloc0" 00:07:49.929 }, 00:07:49.929 "method": "bdev_malloc_create" 00:07:49.929 }, 00:07:49.929 { 00:07:49.929 "params": { 00:07:49.929 "block_size": 512, 00:07:49.929 "num_blocks": 512, 00:07:49.929 "name": "malloc1" 00:07:49.929 }, 00:07:49.929 "method": "bdev_malloc_create" 00:07:49.929 }, 00:07:49.929 { 00:07:49.929 "method": "bdev_wait_for_examine" 00:07:49.930 } 00:07:49.930 ] 00:07:49.930 } 00:07:49.930 ] 00:07:49.930 } 00:07:50.188 [2024-08-11 20:49:00.778189] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.188 [2024-08-11 20:49:00.831219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.188 [2024-08-11 20:49:00.884724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.188 [2024-08-11 20:49:00.938532] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:50.188 [2024-08-11 20:49:00.938625] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.447 [2024-08-11 20:49:01.046200] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@649 -- # es=228 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@658 -- # es=100 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@659 -- # case "$es" in 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@666 -- # es=1 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:50.447 00:07:50.447 real 0m0.547s 00:07:50.447 user 0m0.345s 00:07:50.447 sys 0m0.157s 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:50.447 ************************************ 00:07:50.447 END TEST dd_invalid_input_count 00:07:50.447 ************************************ 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.447 ************************************ 00:07:50.447 START TEST dd_invalid_output_count 00:07:50.447 ************************************ 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1121 -- # 
invalid_output_count 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # local es=0 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.447 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:50.706 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:50.706 [2024-08-11 20:49:01.235734] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
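(Sketch, same caveats as the one above: the output-count case differs from the preceding skip/seek/input-count cases in that the input is the suite's plain dump file and only a single 512-block malloc bdev is configured, so the 513-block --count has to be rejected against the output side.)

conf1=$(mktemp)
cat > "$conf1" <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 512, "block_size": 512 } },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
  --ob=malloc0 --count=513 --bs=512 --json "$conf1"
# expected error: --count value too big (513) - only 512 blocks available in output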
00:07:50.706 [2024-08-11 20:49:01.235832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72643 ] 00:07:50.706 { 00:07:50.706 "subsystems": [ 00:07:50.706 { 00:07:50.706 "subsystem": "bdev", 00:07:50.706 "config": [ 00:07:50.706 { 00:07:50.706 "params": { 00:07:50.706 "block_size": 512, 00:07:50.706 "num_blocks": 512, 00:07:50.706 "name": "malloc0" 00:07:50.706 }, 00:07:50.706 "method": "bdev_malloc_create" 00:07:50.706 }, 00:07:50.706 { 00:07:50.706 "method": "bdev_wait_for_examine" 00:07:50.706 } 00:07:50.706 ] 00:07:50.706 } 00:07:50.706 ] 00:07:50.706 } 00:07:50.706 [2024-08-11 20:49:01.374009] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.706 [2024-08-11 20:49:01.431577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.964 [2024-08-11 20:49:01.484771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.964 [2024-08-11 20:49:01.531112] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:50.964 [2024-08-11 20:49:01.531188] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.964 [2024-08-11 20:49:01.644831] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:50.964 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@649 -- # es=228 00:07:50.964 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:50.964 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@658 -- # es=100 00:07:50.964 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@659 -- # case "$es" in 00:07:50.964 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@666 -- # es=1 00:07:50.964 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:50.964 00:07:50.964 real 0m0.548s 00:07:50.964 user 0m0.344s 00:07:50.964 sys 0m0.156s 00:07:50.964 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.964 20:49:01 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:50.964 ************************************ 00:07:50.964 END TEST dd_invalid_output_count 00:07:50.964 ************************************ 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:51.223 ************************************ 00:07:51.223 START TEST dd_bs_not_multiple 00:07:51.223 ************************************ 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1121 -- # bs_not_multiple 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:51.223 20:49:01 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # local es=0 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.223 20:49:01 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:51.223 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:51.223 [2024-08-11 20:49:01.837939] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
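(Sketch: the bs_not_multiple case reuses the same two-malloc-bdev config as the first sketch above; the only change is --bs=513, which is not a multiple of the bdevs' 512-byte native block size. Note in the trace below that the expected shell statuses differ from the earlier cases: es=234, reduced to 106 and finally 1 by the NOT/es helpers.)

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json "$conf"
# expected error: --bs value must be a multiple of input native block size (512)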
00:07:51.223 [2024-08-11 20:49:01.838038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72674 ] 00:07:51.223 { 00:07:51.223 "subsystems": [ 00:07:51.223 { 00:07:51.223 "subsystem": "bdev", 00:07:51.224 "config": [ 00:07:51.224 { 00:07:51.224 "params": { 00:07:51.224 "block_size": 512, 00:07:51.224 "num_blocks": 512, 00:07:51.224 "name": "malloc0" 00:07:51.224 }, 00:07:51.224 "method": "bdev_malloc_create" 00:07:51.224 }, 00:07:51.224 { 00:07:51.224 "params": { 00:07:51.224 "block_size": 512, 00:07:51.224 "num_blocks": 512, 00:07:51.224 "name": "malloc1" 00:07:51.224 }, 00:07:51.224 "method": "bdev_malloc_create" 00:07:51.224 }, 00:07:51.224 { 00:07:51.224 "method": "bdev_wait_for_examine" 00:07:51.224 } 00:07:51.224 ] 00:07:51.224 } 00:07:51.224 ] 00:07:51.224 } 00:07:51.224 [2024-08-11 20:49:01.967613] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.482 [2024-08-11 20:49:02.022437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.482 [2024-08-11 20:49:02.077956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.482 [2024-08-11 20:49:02.132830] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:51.482 [2024-08-11 20:49:02.132904] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.482 [2024-08-11 20:49:02.248765] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.740 20:49:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@649 -- # es=234 00:07:51.740 20:49:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:07:51.740 20:49:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@658 -- # es=106 00:07:51.740 ************************************ 00:07:51.740 END TEST dd_bs_not_multiple 00:07:51.740 ************************************ 00:07:51.740 20:49:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@659 -- # case "$es" in 00:07:51.740 20:49:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@666 -- # es=1 00:07:51.740 20:49:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:07:51.740 00:07:51.740 real 0m0.557s 00:07:51.740 user 0m0.344s 00:07:51.740 sys 0m0.172s 00:07:51.740 20:49:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.741 20:49:02 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:51.741 ************************************ 00:07:51.741 END TEST spdk_dd_negative 00:07:51.741 ************************************ 00:07:51.741 00:07:51.741 real 0m5.829s 00:07:51.741 user 0m3.106s 00:07:51.741 sys 0m2.098s 00:07:51.741 20:49:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:51.741 20:49:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:51.741 ************************************ 00:07:51.741 END TEST spdk_dd 00:07:51.741 ************************************ 00:07:51.741 00:07:51.741 real 1m15.924s 00:07:51.741 user 0m47.633s 00:07:51.741 sys 0m34.038s 00:07:51.741 20:49:02 spdk_dd -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:07:51.741 20:49:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:51.741 20:49:02 -- spdk/autotest.sh@220 -- # '[' 0 -eq 1 ']' 00:07:51.741 20:49:02 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:51.741 20:49:02 -- spdk/autotest.sh@269 -- # timing_exit lib 00:07:51.741 20:49:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.741 20:49:02 -- common/autotest_common.sh@10 -- # set +x 00:07:51.741 20:49:02 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:07:51.741 20:49:02 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:07:51.741 20:49:02 -- spdk/autotest.sh@285 -- # '[' 1 -eq 1 ']' 00:07:51.741 20:49:02 -- spdk/autotest.sh@286 -- # export NET_TYPE 00:07:51.741 20:49:02 -- spdk/autotest.sh@289 -- # '[' tcp = rdma ']' 00:07:51.741 20:49:02 -- spdk/autotest.sh@292 -- # '[' tcp = tcp ']' 00:07:51.741 20:49:02 -- spdk/autotest.sh@293 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:51.741 20:49:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:51.741 20:49:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.741 20:49:02 -- common/autotest_common.sh@10 -- # set +x 00:07:51.741 ************************************ 00:07:51.741 START TEST nvmf_tcp 00:07:51.741 ************************************ 00:07:51.741 20:49:02 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:52.000 * Looking for test storage... 00:07:52.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:52.000 20:49:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:52.000 20:49:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:52.000 20:49:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:52.000 20:49:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:52.000 20:49:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.000 20:49:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.000 ************************************ 00:07:52.000 START TEST nvmf_target_core 00:07:52.000 ************************************ 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:52.000 * Looking for test storage... 00:07:52.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.000 20:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.000 ************************************ 00:07:52.000 START TEST nvmf_host_management 00:07:52.000 ************************************ 00:07:52.001 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:52.260 * Looking for test storage... 
00:07:52.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # prepare_net_devs 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # local -g is_hw=no 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # remove_spdk_ns 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@452 -- # nvmf_veth_init 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.260 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:07:52.261 Cannot find device "nvmf_init_br" 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:07:52.261 Cannot find device "nvmf_init_br2" 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:07:52.261 Cannot find device "nvmf_tgt_br" 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:07:52.261 Cannot find device "nvmf_tgt_br2" 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:07:52.261 Cannot find device "nvmf_init_br" 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:07:52.261 Cannot find device "nvmf_init_br2" 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:07:52.261 Cannot find device "nvmf_tgt_br" 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:07:52.261 Cannot find device "nvmf_tgt_br2" 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:07:52.261 Cannot find device "nvmf_br" 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:07:52.261 Cannot find device "nvmf_init_if" 00:07:52.261 20:49:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:07:52.261 Cannot find device "nvmf_init_if2" 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:52.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:52.261 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:52.261 20:49:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:07:52.261 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:52.261 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:07:52.519 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk 
ip link set nvmf_tgt_if up 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:52.520 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:07:52.778 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:52.778 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:07:52.778 00:07:52.778 --- 10.0.0.3 ping statistics --- 00:07:52.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.778 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:07:52.778 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:52.778 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:07:52.778 00:07:52.778 --- 10.0.0.4 ping statistics --- 00:07:52.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.778 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:52.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:52.778 00:07:52.778 --- 10.0.0.1 ping statistics --- 00:07:52.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.778 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:52.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:07:52.778 00:07:52.778 --- 10.0.0.2 ping statistics --- 00:07:52.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.778 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@453 -- # return 0 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@501 -- # nvmfpid=72996 00:07:52.778 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # waitforlisten 72996 00:07:52.779 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:52.779 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 72996 ']' 00:07:52.779 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
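In short, the nvmf_veth_init sequence traced above builds a self-contained test network before nvmf_tgt starts: a target namespace nvmf_tgt_ns_spdk owning 10.0.0.3/24 and 10.0.0.4/24, two initiator-side interfaces with 10.0.0.1/24 and 10.0.0.2/24, all veth peers enslaved to a bridge nvmf_br, iptables ACCEPT rules for TCP port 4420, and the four pings above as a sanity check. A condensed replay of the traced commands (the real helper first probes for and deletes any stale devices, hence the "Cannot find device" lines, and tags its iptables rules with comments):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$br" up
  ip link set "$br" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # initiator side -> target-namespace address, as in the statistics above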
00:07:52.779 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.779 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.779 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.779 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.779 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:52.779 [2024-08-11 20:49:03.391476] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:52.779 [2024-08-11 20:49:03.391557] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.779 [2024-08-11 20:49:03.532123] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.037 [2024-08-11 20:49:03.607380] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.037 [2024-08-11 20:49:03.607447] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.037 [2024-08-11 20:49:03.607462] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.037 [2024-08-11 20:49:03.607473] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.037 [2024-08-11 20:49:03.607482] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
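At this point nvmfappstart has launched the NVMe-oF target inside the namespace (pid 72996) and waitforlisten is polling for its RPC socket. The command recorded above runs build/bin/nvmf_tgt with -m 0x1E, which pins the reactors to cores 1-4 (matching the four "Reactor started" notices that follow), -e 0xFFFF, which enables every tracepoint group (hence the spdk_trace hints printed by app_setup_trace), and -i 0, which selects shared-memory id 0 so the trace buffer lands in /dev/shm/nvmf_trace.0. A minimal stand-in for the nvmfappstart/waitforlisten pair, assuming $SPDK_DIR points at the checkout used in this run and not claiming to match the real helpers line for line:

  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # waitforlisten, simplified: wait for the UNIX-domain RPC socket to appear,
  # then let framework_wait_init confirm subsystem initialization has finished
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init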
00:07:53.037 [2024-08-11 20:49:03.607641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.037 [2024-08-11 20:49:03.607782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.037 [2024-08-11 20:49:03.608004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:53.037 [2024-08-11 20:49:03.608021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.037 [2024-08-11 20:49:03.668182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.037 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.037 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:07:53.037 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:07:53.037 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.037 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.038 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.038 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.038 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:53.038 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.038 [2024-08-11 20:49:03.782547] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.038 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:53.038 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:53.038 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:53.038 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.038 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:53.038 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.297 Malloc0 00:07:53.297 [2024-08-11 20:49:03.865079] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=73042 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 73042 /var/tmp/bdevperf.sock 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 73042 ']' 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@552 -- # config=() 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@552 -- # local subsystem config 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:07:53.297 { 00:07:53.297 "params": { 00:07:53.297 "name": "Nvme$subsystem", 00:07:53.297 "trtype": "$TEST_TRANSPORT", 00:07:53.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.297 "adrfam": "ipv4", 00:07:53.297 "trsvcid": "$NVMF_PORT", 00:07:53.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.297 "hdgst": ${hdgst:-false}, 00:07:53.297 "ddgst": ${ddgst:-false} 00:07:53.297 }, 00:07:53.297 "method": "bdev_nvme_attach_controller" 00:07:53.297 } 00:07:53.297 EOF 00:07:53.297 )") 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@574 -- # cat 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@576 -- # jq . 
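The create_subsystems step above pipes a generated rpcs.txt into rpc_cmd, but the file itself is not echoed in this trace. Judging from the Malloc0 bdev, the "Listening on 10.0.0.3 port 4420" notice, and the cnode0/host0 NQNs used by the initiator config printed below, it corresponds to a batch along these lines. This is a hedged reconstruction: the bdev size, serial number, and exact option spelling are assumptions, not taken from the log (the TCP transport itself was already created by the explicit nvmf_create_transport call above).

  # create_subsystems batch fed to rpc_cmd (reconstructed, values assumed)
  bdev_malloc_create -b Malloc0 64 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0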
00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@577 -- # IFS=, 00:07:53.297 20:49:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:07:53.297 "params": { 00:07:53.297 "name": "Nvme0", 00:07:53.297 "trtype": "tcp", 00:07:53.297 "traddr": "10.0.0.3", 00:07:53.297 "adrfam": "ipv4", 00:07:53.297 "trsvcid": "4420", 00:07:53.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:53.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:53.297 "hdgst": false, 00:07:53.297 "ddgst": false 00:07:53.297 }, 00:07:53.297 "method": "bdev_nvme_attach_controller" 00:07:53.297 }' 00:07:53.297 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:53.297 [2024-08-11 20:49:03.976113] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:07:53.297 [2024-08-11 20:49:03.976381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73042 ] 00:07:53.556 [2024-08-11 20:49:04.122184] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.556 [2024-08-11 20:49:04.213192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.556 [2024-08-11 20:49:04.288708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.815 Running I/O for 10 seconds... 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b 
Nvme0n1 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:53.815 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:54.073 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:54.073 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:54.073 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:54.073 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:54.073 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.073 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:54.073 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:54.337 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:54.337 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:54.337 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:54.337 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:54.337 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:54.337 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:54.337 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:54.337 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.337 [2024-08-11 20:49:04.859035] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with [2024-08-11 20:49:04.859116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.337 [2024-08-11 20:49:04.859156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.337 [2024-08-11 20:49:04.859186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.337 [2024-08-11 20:49:04.859195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.337 [2024-08-11 20:49:04.859205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.337 [2024-08-11 20:49:04.859214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.337 [2024-08-11 20:49:04.859223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:54.337 [2024-08-11 20:49:04.859232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.337 [2024-08-11 20:49:04.859240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x266a320 is same with the state(6) to be set 00:07:54.337 the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859689] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859839] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859857] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859866] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859874] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859883] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859891] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859899] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859908] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859916] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859924] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859932] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859940] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859982] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.859990] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860009] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the 
state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860018] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860027] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860036] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860045] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860054] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860064] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860086] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860095] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860104] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860113] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860121] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860130] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860139] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860149] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860159] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860168] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860177] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860186] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860206] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860214] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860223] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860231] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860240] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860249] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860257] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860265] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860273] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860282] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860290] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860298] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860306] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860315] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860323] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860345] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860353] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860368] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860375] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860401] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860415] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860424] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860442] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860451] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860459] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 
20:49:04.860468] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860476] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860484] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2125280 is same with the state(6) to be set 00:07:54.337 [2024-08-11 20:49:04.860617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.337 [2024-08-11 20:49:04.860652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.337 [2024-08-11 20:49:04.860695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860869] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.860950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.860975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.338 [2024-08-11 20:49:04.861636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.338 [2024-08-11 20:49:04.861649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:54.339 [2024-08-11 20:49:04.861810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.861970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.861979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.862004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.862017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 
20:49:04.862042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.862051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.862062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.862075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.862085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.862094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.862104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.862113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.862123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:54.339 [2024-08-11 20:49:04.862131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:54.339 [2024-08-11 20:49:04.862141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x268a0a0 is same with the state(6) to be set 00:07:54.339 [2024-08-11 20:49:04.862213] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x268a0a0 was disconnected and freed. reset controller. 
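The burst of tcp.c and nvme_qpair messages above is the intended effect of the nvmf_subsystem_remove_host call issued a few lines earlier while bdevperf was still running: once host0 loses access to cnode0, the target tears down that initiator's queue pair, every queued READ completes as ABORTED - SQ DELETION, and the bdev_nvme driver frees qpair 0x268a0a0 and schedules a controller reset (picked up again just below, where the host is re-added and the reset succeeds). The same fault can be injected by hand against this setup; a sketch using the socket paths from this run, with rpc.py standing in for the rpc_cmd helper:

  # target side: revoke the initiator's host NQN while I/O is in flight
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock nvmf_subsystem_remove_host \
      nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # initiator side: query the bdevperf-attached bdev to watch progress stall during the reset
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1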
00:07:54.339 task offset: 73728 on job bdev=Nvme0n1 fails 00:07:54.339 00:07:54.339 Latency(us) 00:07:54.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.339 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:54.339 Job: Nvme0n1 ended in about 0.45 seconds with error 00:07:54.339 Verification LBA range: start 0x0 length 0x400 00:07:54.339 Nvme0n1 : 0.45 1269.91 79.37 141.10 0.00 43955.69 5510.98 43611.23 00:07:54.339 =================================================================================================================== 00:07:54.339 Total : 1269.91 79.37 141.10 0.00 43955.69 5510.98 43611.23 00:07:54.339 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:54.339 [2024-08-11 20:49:04.863422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:54.339 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:54.339 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@557 -- # xtrace_disable 00:07:54.339 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.339 [2024-08-11 20:49:04.865287] app.c:1054:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:54.339 [2024-08-11 20:49:04.865305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x266a320 (9): Bad file descriptor 00:07:54.339 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:07:54.339 20:49:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:54.339 [2024-08-11 20:49:04.877419] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
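For reference, the read_io_count checks just before the fault injection (67 on the first poll, 515 a quarter-second later) come from host_management.sh's waitforio gate, which makes sure the verify workload is actually making progress before the host is revoked; only then is the failure and recovery shown above meaningful. Simplified, and assuming the rpc_cmd and jq helpers seen in the trace, the loop looks roughly like this:

  waitforio() {
      local sock=$1 bdev=$2
      local ret=1 i count
      for (( i = 10; i != 0; i-- )); do
          # read completions observed so far by the initiator-side bdev
          count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
          if [ "$count" -ge 100 ]; then
              ret=0
              break
          fi
          sleep 0.25
      done
      return $ret
  }
  waitforio /var/tmp/bdevperf.sock Nvme0n1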
00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 73042 00:07:55.286 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (73042) - No such process 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@552 -- # config=() 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@552 -- # local subsystem config 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:07:55.286 { 00:07:55.286 "params": { 00:07:55.286 "name": "Nvme$subsystem", 00:07:55.286 "trtype": "$TEST_TRANSPORT", 00:07:55.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.286 "adrfam": "ipv4", 00:07:55.286 "trsvcid": "$NVMF_PORT", 00:07:55.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.286 "hdgst": ${hdgst:-false}, 00:07:55.286 "ddgst": ${ddgst:-false} 00:07:55.286 }, 00:07:55.286 "method": "bdev_nvme_attach_controller" 00:07:55.286 } 00:07:55.286 EOF 00:07:55.286 )") 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@574 -- # cat 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@576 -- # jq . 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@577 -- # IFS=, 00:07:55.286 20:49:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:07:55.286 "params": { 00:07:55.286 "name": "Nvme0", 00:07:55.286 "trtype": "tcp", 00:07:55.286 "traddr": "10.0.0.3", 00:07:55.286 "adrfam": "ipv4", 00:07:55.286 "trsvcid": "4420", 00:07:55.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:55.286 "hdgst": false, 00:07:55.286 "ddgst": false 00:07:55.286 }, 00:07:55.286 "method": "bdev_nvme_attach_controller" 00:07:55.287 }' 00:07:55.287 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:55.287 [2024-08-11 20:49:05.934897] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:07:55.287 [2024-08-11 20:49:05.935016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73082 ] 00:07:55.545 [2024-08-11 20:49:06.071924] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.545 [2024-08-11 20:49:06.138259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.545 [2024-08-11 20:49:06.206429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.545 Running I/O for 1 seconds... 00:07:56.921 00:07:56.921 Latency(us) 00:07:56.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.921 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:56.921 Verification LBA range: start 0x0 length 0x400 00:07:56.921 Nvme0n1 : 1.04 1606.04 100.38 0.00 0.00 39106.73 3842.79 36938.47 00:07:56.921 =================================================================================================================== 00:07:56.921 Total : 1606.04 100.38 0.00 0.00 39106.73 3842.79 36938.47 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # nvmfcleanup 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.921 rmmod nvme_tcp 00:07:56.921 rmmod nvme_fabrics 00:07:56.921 rmmod nvme_keyring 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # '[' -n 72996 ']' 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # killprocess 72996 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 72996 ']' 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 72996 00:07:56.921 20:49:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:56.921 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72996 00:07:57.179 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:07:57.179 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:07:57.179 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72996' 00:07:57.179 killing process with pid 72996 00:07:57.179 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 72996 00:07:57.179 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 72996 00:07:57.179 [2024-08-11 20:49:07.935943] app.c: 712:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # iptr 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@783 -- # iptables-save 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@783 -- # iptables-restore 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:07:57.438 20:49:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:07:57.438 20:49:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # remove_spdk_ns 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # return 0 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:57.438 ************************************ 00:07:57.438 END TEST nvmf_host_management 00:07:57.438 ************************************ 00:07:57.438 00:07:57.438 real 0m5.471s 00:07:57.438 user 0m19.605s 00:07:57.438 sys 0m1.581s 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.438 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.698 ************************************ 00:07:57.698 START TEST nvmf_lvol 00:07:57.698 ************************************ 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:57.698 * Looking for test storage... 
00:07:57.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # prepare_net_devs 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # local -g is_hw=no 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # remove_spdk_ns 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@452 -- # nvmf_veth_init 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.698 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:07:57.699 Cannot find device "nvmf_init_br" 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:07:57.699 Cannot find device "nvmf_init_br2" 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:07:57.699 Cannot find device "nvmf_tgt_br" 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # true 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:07:57.699 Cannot find device "nvmf_tgt_br2" 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # true 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:07:57.699 Cannot find device "nvmf_init_br" 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:07:57.699 Cannot find device "nvmf_init_br2" 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:07:57.699 Cannot find device "nvmf_tgt_br" 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:07:57.699 Cannot find device "nvmf_tgt_br2" 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:57.699 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:07:57.957 Cannot find device "nvmf_br" 00:07:57.957 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:57.957 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:07:57.957 Cannot find device "nvmf_init_if" 00:07:57.957 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:57.957 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:07:57.957 Cannot find device "nvmf_init_if2" 00:07:57.957 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:57.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:57.958 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip 
netns add nvmf_tgt_ns_spdk 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:07:57.958 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:57.958 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:07:57.958 00:07:57.958 --- 10.0.0.3 ping statistics --- 00:07:57.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.958 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:07:57.958 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:57.958 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:07:57.958 00:07:57.958 --- 10.0.0.4 ping statistics --- 00:07:57.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.958 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:57.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:57.958 00:07:57.958 --- 10.0.0.1 ping statistics --- 00:07:57.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.958 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:57.958 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:58.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:58.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:07:58.216 00:07:58.216 --- 10.0.0.2 ping statistics --- 00:07:58.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.216 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@453 -- # return 0 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@501 -- # nvmfpid=73337 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # waitforlisten 73337 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 73337 ']' 00:07:58.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.216 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.217 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:58.217 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.217 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:58.217 20:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.217 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:07:58.217 [2024-08-11 20:49:08.820550] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
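(Annotation: before the lvol target starts, nvmf_veth_init, traced above, opens NVMe/TCP port 4420 through the firewall and verifies connectivity in both directions. A condensed sketch using only commands visible in the trace; the SPDK_NVMF comment tag is what the later "iptables-save | grep -v SPDK_NVMF | iptables-restore" cleanup keys on.)

    # accept NVMe/TCP (port 4420) from both initiator-side veth interfaces
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    # let traffic hairpin across the nvmf_br bridge
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # host -> target namespace, then target namespace -> host
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2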
00:07:58.217 [2024-08-11 20:49:08.820789] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.217 [2024-08-11 20:49:08.960114] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.475 [2024-08-11 20:49:09.034283] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.475 [2024-08-11 20:49:09.034361] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.475 [2024-08-11 20:49:09.034375] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.475 [2024-08-11 20:49:09.034386] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.475 [2024-08-11 20:49:09.034395] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.475 [2024-08-11 20:49:09.034557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.475 [2024-08-11 20:49:09.034793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.475 [2024-08-11 20:49:09.034810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.475 [2024-08-11 20:49:09.093902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.475 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:58.475 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:07:58.475 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:07:58.475 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.475 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.475 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:58.733 [2024-08-11 20:49:09.498570] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.992 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:59.250 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:59.250 20:49:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:59.508 20:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:59.508 20:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:59.766 20:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:00.024 20:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ae915b3a-aa34-4804-83f3-dedcd6edf7c6 00:08:00.024 20:49:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ae915b3a-aa34-4804-83f3-dedcd6edf7c6 lvol 20 00:08:00.282 20:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ceef7fe7-101a-4822-94bf-cdb36867ea70 00:08:00.282 20:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:00.540 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ceef7fe7-101a-4822-94bf-cdb36867ea70 00:08:00.798 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:01.056 [2024-08-11 20:49:11.814878] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:01.056 20:49:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:01.622 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73408 00:08:01.622 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:01.622 20:49:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:01.622 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:02.557 20:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot ceef7fe7-101a-4822-94bf-cdb36867ea70 MY_SNAPSHOT 00:08:02.815 20:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3f43c8a3-93b2-460b-b80e-a5530213ed08 00:08:02.815 20:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize ceef7fe7-101a-4822-94bf-cdb36867ea70 30 00:08:03.074 20:49:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 3f43c8a3-93b2-460b-b80e-a5530213ed08 MY_CLONE 00:08:03.332 20:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0bc3477b-9e39-43cd-95ce-4c647e3c03b7 00:08:03.332 20:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0bc3477b-9e39-43cd-95ce-4c647e3c03b7 00:08:03.941 20:49:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73408 00:08:12.057 Initializing NVMe Controllers 00:08:12.057 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:12.057 Controller IO queue size 128, less than required. 00:08:12.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:12.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:12.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:12.057 Initialization complete. Launching workers. 
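(Annotation: the rpc.py calls interleaved through the trace above amount to the sequence sketched below. This is a paraphrase of what nvmf_lvol.sh drives, with the UUID capture simplified, generated UUIDs elided, and size arguments left exactly as the script passes them.)

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc_py bdev_malloc_create 64 512                                   # Malloc0
    $rpc_py bdev_malloc_create 64 512                                   # Malloc1
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
    lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)                   # lvstore on the raid
    lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)                  # lvol at the initial size (20)

    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

    # while spdk_nvme_perf writes to the exported namespace:
    snap=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc_py bdev_lvol_resize "$lvol" 30                                 # grow to the final size (30)
    clone=$($rpc_py bdev_lvol_clone "$snap" MY_CLONE)
    $rpc_py bdev_lvol_inflate "$clone"                                  # decouple the clone from its snapshot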
00:08:12.057 ======================================================== 00:08:12.057 Latency(us) 00:08:12.057 Device Information : IOPS MiB/s Average min max 00:08:12.057 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10817.28 42.25 11841.25 2468.23 64366.50 00:08:12.057 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10865.68 42.44 11787.12 469.02 84794.30 00:08:12.057 ======================================================== 00:08:12.057 Total : 21682.96 84.70 11814.13 469.02 84794.30 00:08:12.057 00:08:12.057 20:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:12.057 20:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ceef7fe7-101a-4822-94bf-cdb36867ea70 00:08:12.315 20:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae915b3a-aa34-4804-83f3-dedcd6edf7c6 00:08:12.574 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:12.574 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:12.574 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:12.574 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # nvmfcleanup 00:08:12.574 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:12.574 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.574 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:12.574 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.574 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.574 rmmod nvme_tcp 00:08:12.574 rmmod nvme_fabrics 00:08:12.832 rmmod nvme_keyring 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # '[' -n 73337 ']' 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # killprocess 73337 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 73337 ']' 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 73337 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73337 00:08:12.832 killing process with pid 73337 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 73337' 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 73337 00:08:12.832 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 73337 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # iptr 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@783 -- # iptables-save 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@783 -- # iptables-restore 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:08:13.091 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # remove_spdk_ns 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # return 0 00:08:13.355 00:08:13.355 real 0m15.679s 00:08:13.355 user 1m4.838s 00:08:13.355 sys 0m4.251s 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:13.355 ************************************ 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:13.355 END TEST nvmf_lvol 00:08:13.355 ************************************ 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.355 ************************************ 00:08:13.355 START TEST nvmf_lvs_grow 00:08:13.355 ************************************ 00:08:13.355 20:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:13.355 * Looking for test storage... 00:08:13.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.355 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # prepare_net_devs 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # local -g is_hw=no 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # remove_spdk_ns 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@452 -- # nvmf_veth_init 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:13.356 20:49:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:08:13.356 Cannot find device "nvmf_init_br" 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:13.356 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:08:13.619 Cannot find device "nvmf_init_br2" 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:08:13.619 Cannot find device "nvmf_tgt_br" 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:08:13.619 Cannot find device "nvmf_tgt_br2" 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:08:13.619 Cannot find device "nvmf_init_br" 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:08:13.619 Cannot find device "nvmf_init_br2" 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:08:13.619 Cannot find device "nvmf_tgt_br" 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:08:13.619 Cannot find device "nvmf_tgt_br2" 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:08:13.619 Cannot find device "nvmf_br" 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:08:13.619 Cannot find device "nvmf_init_if" 00:08:13.619 20:49:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:08:13.619 Cannot find device "nvmf_init_if2" 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:13.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:13.619 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:13.619 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
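For reference, the veth and network-namespace topology that nvmf_veth_init builds in the trace above can be reproduced by hand with roughly the following ip(8) calls (a minimal sketch using the same interface, namespace and address names as this run; the second interface pair, nvmf_init_if2/nvmf_tgt_if2, is set up the same way and is omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair, stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_FIRST_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up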
00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:08:13.620 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:08:13.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:13.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:13.879 00:08:13.879 --- 10.0.0.3 ping statistics --- 00:08:13.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.879 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:08:13.879 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:13.879 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:08:13.879 00:08:13.879 --- 10.0.0.4 ping statistics --- 00:08:13.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.879 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:13.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:13.879 00:08:13.879 --- 10.0.0.1 ping statistics --- 00:08:13.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.879 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:13.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:13.879 00:08:13.879 --- 10.0.0.2 ping statistics --- 00:08:13.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.879 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@453 -- # return 0 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@501 -- # nvmfpid=73782 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # waitforlisten 73782 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 73782 ']' 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:13.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
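The nvmfappstart step above reduces to loading the kernel NVMe/TCP initiator module on the host and launching the SPDK target inside the namespace, then waiting for its RPC socket. A rough, simplified equivalent of what the traced helpers do (the real waitforlisten in autotest_common.sh is more careful; paths and flags are the ones used by this job, and the polling loop here is only an illustrative stand-in):

  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # poll the default RPC socket until the target answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done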
00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:13.879 20:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.879 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:13.879 [2024-08-11 20:49:24.547424] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:08:13.879 [2024-08-11 20:49:24.547508] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.138 [2024-08-11 20:49:24.686664] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.138 [2024-08-11 20:49:24.797875] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.138 [2024-08-11 20:49:24.797958] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.138 [2024-08-11 20:49:24.797971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.138 [2024-08-11 20:49:24.797980] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.138 [2024-08-11 20:49:24.797988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.138 [2024-08-11 20:49:24.798056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.138 [2024-08-11 20:49:24.882187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.073 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:15.073 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:08:15.073 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:08:15.073 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.073 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.073 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.073 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:15.331 [2024-08-11 20:49:25.856375] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.331 ************************************ 00:08:15.331 START TEST lvs_grow_clean 00:08:15.331 ************************************ 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:15.331 20:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:15.589 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:15.589 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:15.847 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:15.848 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:15.848 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:16.105 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:16.105 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:16.105 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3e74f6be-ce90-4f60-884b-9690a8817e2e lvol 150 00:08:16.364 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=316b7ad7-8cb1-4777-b67c-0795647d4740 00:08:16.364 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:16.364 20:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:16.621 [2024-08-11 20:49:27.204813] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:16.621 [2024-08-11 20:49:27.204883] vbdev_lvol.c: 
165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:16.621 true 00:08:16.621 20:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:16.622 20:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:16.880 20:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:16.880 20:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:17.138 20:49:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 316b7ad7-8cb1-4777-b67c-0795647d4740 00:08:17.397 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:17.655 [2024-08-11 20:49:28.281606] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:17.655 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:17.913 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:17.913 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73870 00:08:17.913 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.913 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73870 /var/tmp/bdevperf.sock 00:08:17.913 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 73870 ']' 00:08:17.913 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.913 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:17.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.913 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.913 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:17.913 20:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:17.913 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:17.913 [2024-08-11 20:49:28.560432] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
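Exporting the logical volume over NVMe/TCP, as traced above, comes down to four RPCs against the target: create the TCP transport, create a subsystem, attach the lvol bdev as a namespace, and add a listener on the namespaced target address. Condensed from this run (the UUID is the lvol created earlier; 10.0.0.3:4420 is NVMF_FIRST_TARGET_IP and the default NVMe/TCP port):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 316b7ad7-8cb1-4777-b67c-0795647d4740
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420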
00:08:17.913 [2024-08-11 20:49:28.560536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73870 ] 00:08:18.171 [2024-08-11 20:49:28.701879] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.172 [2024-08-11 20:49:28.801274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.172 [2024-08-11 20:49:28.856492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.738 20:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:18.738 20:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:08:18.738 20:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:19.305 Nvme0n1 00:08:19.305 20:49:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:19.305 [ 00:08:19.305 { 00:08:19.305 "name": "Nvme0n1", 00:08:19.305 "aliases": [ 00:08:19.305 "316b7ad7-8cb1-4777-b67c-0795647d4740" 00:08:19.305 ], 00:08:19.305 "product_name": "NVMe disk", 00:08:19.305 "block_size": 4096, 00:08:19.305 "num_blocks": 38912, 00:08:19.305 "uuid": "316b7ad7-8cb1-4777-b67c-0795647d4740", 00:08:19.305 "assigned_rate_limits": { 00:08:19.305 "rw_ios_per_sec": 0, 00:08:19.305 "rw_mbytes_per_sec": 0, 00:08:19.305 "r_mbytes_per_sec": 0, 00:08:19.305 "w_mbytes_per_sec": 0 00:08:19.305 }, 00:08:19.305 "claimed": false, 00:08:19.305 "zoned": false, 00:08:19.305 "supported_io_types": { 00:08:19.305 "read": true, 00:08:19.305 "write": true, 00:08:19.305 "unmap": true, 00:08:19.305 "flush": true, 00:08:19.305 "reset": true, 00:08:19.305 "nvme_admin": true, 00:08:19.305 "nvme_io": true, 00:08:19.305 "nvme_io_md": false, 00:08:19.305 "write_zeroes": true, 00:08:19.305 "zcopy": false, 00:08:19.305 "get_zone_info": false, 00:08:19.305 "zone_management": false, 00:08:19.305 "zone_append": false, 00:08:19.305 "compare": true, 00:08:19.305 "compare_and_write": true, 00:08:19.305 "abort": true, 00:08:19.305 "seek_hole": false, 00:08:19.305 "seek_data": false, 00:08:19.305 "copy": true, 00:08:19.305 "nvme_iov_md": false 00:08:19.305 }, 00:08:19.305 "memory_domains": [ 00:08:19.305 { 00:08:19.305 "dma_device_id": "system", 00:08:19.305 "dma_device_type": 1 00:08:19.305 } 00:08:19.305 ], 00:08:19.305 "driver_specific": { 00:08:19.305 "nvme": [ 00:08:19.305 { 00:08:19.305 "trid": { 00:08:19.305 "trtype": "TCP", 00:08:19.305 "adrfam": "IPv4", 00:08:19.305 "traddr": "10.0.0.3", 00:08:19.305 "trsvcid": "4420", 00:08:19.305 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:19.305 }, 00:08:19.305 "ctrlr_data": { 00:08:19.305 "cntlid": 1, 00:08:19.305 "vendor_id": "0x8086", 00:08:19.305 "model_number": "SPDK bdev Controller", 00:08:19.305 "serial_number": "SPDK0", 00:08:19.305 "firmware_revision": "24.09", 00:08:19.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:19.305 "oacs": { 00:08:19.305 "security": 0, 00:08:19.305 "format": 0, 00:08:19.305 "firmware": 0, 00:08:19.305 "ns_manage": 0 
00:08:19.305 }, 00:08:19.305 "multi_ctrlr": true, 00:08:19.305 "ana_reporting": false 00:08:19.305 }, 00:08:19.305 "vs": { 00:08:19.305 "nvme_version": "1.3" 00:08:19.305 }, 00:08:19.305 "ns_data": { 00:08:19.305 "id": 1, 00:08:19.305 "can_share": true 00:08:19.305 } 00:08:19.305 } 00:08:19.305 ], 00:08:19.305 "mp_policy": "active_passive" 00:08:19.305 } 00:08:19.305 } 00:08:19.305 ] 00:08:19.305 20:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73894 00:08:19.305 20:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:19.305 20:49:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:19.563 Running I/O for 10 seconds... 00:08:20.498 Latency(us) 00:08:20.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.498 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:20.498 =================================================================================================================== 00:08:20.498 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:20.498 00:08:21.459 20:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:21.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.459 Nvme0n1 : 2.00 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:08:21.459 =================================================================================================================== 00:08:21.459 Total : 7175.50 28.03 0.00 0.00 0.00 0.00 0.00 00:08:21.459 00:08:21.717 true 00:08:21.717 20:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:21.717 20:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:21.975 20:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:21.975 20:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:21.975 20:49:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73894 00:08:22.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.549 Nvme0n1 : 3.00 7154.33 27.95 0.00 0.00 0.00 0.00 0.00 00:08:22.549 =================================================================================================================== 00:08:22.549 Total : 7154.33 27.95 0.00 0.00 0.00 0.00 0.00 00:08:22.549 00:08:23.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.484 Nvme0n1 : 4.00 7080.25 27.66 0.00 0.00 0.00 0.00 0.00 00:08:23.484 =================================================================================================================== 00:08:23.484 Total : 7080.25 27.66 0.00 0.00 0.00 0.00 0.00 00:08:23.484 00:08:24.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.858 Nvme0n1 : 5.00 
7035.80 27.48 0.00 0.00 0.00 0.00 0.00 00:08:24.858 =================================================================================================================== 00:08:24.858 Total : 7035.80 27.48 0.00 0.00 0.00 0.00 0.00 00:08:24.858 00:08:25.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.792 Nvme0n1 : 6.00 6963.83 27.20 0.00 0.00 0.00 0.00 0.00 00:08:25.792 =================================================================================================================== 00:08:25.792 Total : 6963.83 27.20 0.00 0.00 0.00 0.00 0.00 00:08:25.792 00:08:26.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.728 Nvme0n1 : 7.00 6948.71 27.14 0.00 0.00 0.00 0.00 0.00 00:08:26.728 =================================================================================================================== 00:08:26.728 Total : 6948.71 27.14 0.00 0.00 0.00 0.00 0.00 00:08:26.728 00:08:27.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.669 Nvme0n1 : 8.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:08:27.669 =================================================================================================================== 00:08:27.669 Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:08:27.669 00:08:28.605 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.605 Nvme0n1 : 9.00 6914.44 27.01 0.00 0.00 0.00 0.00 0.00 00:08:28.605 =================================================================================================================== 00:08:28.605 Total : 6914.44 27.01 0.00 0.00 0.00 0.00 0.00 00:08:28.605 00:08:29.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.541 Nvme0n1 : 10.00 6910.70 26.99 0.00 0.00 0.00 0.00 0.00 00:08:29.541 =================================================================================================================== 00:08:29.541 Total : 6910.70 26.99 0.00 0.00 0.00 0.00 0.00 00:08:29.541 00:08:29.541 00:08:29.541 Latency(us) 00:08:29.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.541 Nvme0n1 : 10.01 6914.79 27.01 0.00 0.00 18506.03 4438.57 125829.12 00:08:29.541 =================================================================================================================== 00:08:29.541 Total : 6914.79 27.01 0.00 0.00 18506.03 4438.57 125829.12 00:08:29.541 0 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73870 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 73870 ']' 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 73870 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73870 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:29.541 killing process with pid 73870 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73870' 00:08:29.541 Received shutdown signal, test time was about 10.000000 seconds 00:08:29.541 00:08:29.541 Latency(us) 00:08:29.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.541 =================================================================================================================== 00:08:29.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 73870 00:08:29.541 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 73870 00:08:29.800 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:30.059 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.318 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:30.318 20:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:30.576 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:30.577 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:30.577 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:30.838 [2024-08-11 20:49:41.396169] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # local es=0 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:30.838 20:49:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:30.838 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:31.097 request: 00:08:31.097 { 00:08:31.097 "uuid": "3e74f6be-ce90-4f60-884b-9690a8817e2e", 00:08:31.097 "method": "bdev_lvol_get_lvstores", 00:08:31.097 "req_id": 1 00:08:31.097 } 00:08:31.097 Got JSON-RPC error response 00:08:31.097 response: 00:08:31.097 { 00:08:31.097 "code": -19, 00:08:31.097 "message": "No such device" 00:08:31.097 } 00:08:31.097 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # es=1 00:08:31.097 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:08:31.097 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:08:31.097 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:08:31.097 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.354 aio_bdev 00:08:31.355 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 316b7ad7-8cb1-4777-b67c-0795647d4740 00:08:31.355 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=316b7ad7-8cb1-4777-b67c-0795647d4740 00:08:31.355 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:31.355 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:08:31.355 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:31.355 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:31.355 20:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:31.355 20:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 316b7ad7-8cb1-4777-b67c-0795647d4740 -t 2000 00:08:31.613 [ 00:08:31.613 { 00:08:31.613 "name": "316b7ad7-8cb1-4777-b67c-0795647d4740", 00:08:31.613 "aliases": [ 00:08:31.613 "lvs/lvol" 00:08:31.613 ], 00:08:31.613 "product_name": "Logical Volume", 00:08:31.613 "block_size": 4096, 00:08:31.613 "num_blocks": 38912, 00:08:31.613 "uuid": "316b7ad7-8cb1-4777-b67c-0795647d4740", 00:08:31.613 
"assigned_rate_limits": { 00:08:31.613 "rw_ios_per_sec": 0, 00:08:31.613 "rw_mbytes_per_sec": 0, 00:08:31.613 "r_mbytes_per_sec": 0, 00:08:31.613 "w_mbytes_per_sec": 0 00:08:31.613 }, 00:08:31.613 "claimed": false, 00:08:31.613 "zoned": false, 00:08:31.613 "supported_io_types": { 00:08:31.613 "read": true, 00:08:31.613 "write": true, 00:08:31.613 "unmap": true, 00:08:31.613 "flush": false, 00:08:31.613 "reset": true, 00:08:31.613 "nvme_admin": false, 00:08:31.613 "nvme_io": false, 00:08:31.613 "nvme_io_md": false, 00:08:31.613 "write_zeroes": true, 00:08:31.613 "zcopy": false, 00:08:31.613 "get_zone_info": false, 00:08:31.613 "zone_management": false, 00:08:31.613 "zone_append": false, 00:08:31.613 "compare": false, 00:08:31.613 "compare_and_write": false, 00:08:31.613 "abort": false, 00:08:31.613 "seek_hole": true, 00:08:31.613 "seek_data": true, 00:08:31.613 "copy": false, 00:08:31.613 "nvme_iov_md": false 00:08:31.613 }, 00:08:31.613 "driver_specific": { 00:08:31.613 "lvol": { 00:08:31.613 "lvol_store_uuid": "3e74f6be-ce90-4f60-884b-9690a8817e2e", 00:08:31.613 "base_bdev": "aio_bdev", 00:08:31.613 "thin_provision": false, 00:08:31.613 "num_allocated_clusters": 38, 00:08:31.613 "snapshot": false, 00:08:31.613 "clone": false, 00:08:31.613 "esnap_clone": false 00:08:31.613 } 00:08:31.613 } 00:08:31.613 } 00:08:31.613 ] 00:08:31.613 20:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:08:31.613 20:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:31.613 20:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:31.871 20:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:31.871 20:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:31.871 20:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:32.130 20:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:32.130 20:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 316b7ad7-8cb1-4777-b67c-0795647d4740 00:08:32.388 20:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e74f6be-ce90-4f60-884b-9690a8817e2e 00:08:32.656 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.918 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:33.176 ************************************ 00:08:33.176 END TEST lvs_grow_clean 00:08:33.176 ************************************ 00:08:33.176 00:08:33.176 real 0m17.887s 00:08:33.176 user 0m16.894s 00:08:33.176 sys 0m2.525s 00:08:33.176 20:49:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.176 ************************************ 00:08:33.176 START TEST lvs_grow_dirty 00:08:33.176 ************************************ 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:33.176 20:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:33.434 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:33.434 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:33.692 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:33.692 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:33.692 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:33.951 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:33.951 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # 
(( data_clusters == 49 )) 00:08:33.951 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb lvol 150 00:08:34.209 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6fef8230-eade-4499-b2f7-ae15ea10ccf4 00:08:34.209 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:34.209 20:49:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:34.467 [2024-08-11 20:49:45.183454] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:34.467 [2024-08-11 20:49:45.183520] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:34.467 true 00:08:34.467 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:34.467 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:34.725 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:34.725 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.984 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6fef8230-eade-4499-b2f7-ae15ea10ccf4 00:08:35.242 20:49:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:35.500 [2024-08-11 20:49:46.147944] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:35.500 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:35.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
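The grow operation this test exercises is the same in both the clean and dirty variants: resize the backing file, let the aio bdev pick up the new size, then grow the lvstore into the added space and verify the cluster count. Condensed from the RPCs traced in this run (the UUID is the lvstore created just above; total_data_clusters is 49 before the grow and 99 after):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  $rpc bdev_aio_rescan aio_bdev        # block count goes from 51200 to 102400
  $rpc bdev_lvol_grow_lvstore -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb
  $rpc bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb | jq -r '.[0].total_data_clusters'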
00:08:35.759 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74140 00:08:35.759 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:35.759 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:35.759 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74140 /var/tmp/bdevperf.sock 00:08:35.759 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 74140 ']' 00:08:35.759 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:35.759 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:35.759 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:35.759 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:35.759 20:49:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:35.759 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:35.759 [2024-08-11 20:49:46.466608] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
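On the initiator side the workload is driven by bdevperf started in wait mode (-z), after which the NVMe/TCP controller is attached and the run is kicked off over bdevperf's private RPC socket. In outline, with the same arguments as the trace above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests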
00:08:35.759 [2024-08-11 20:49:46.466829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74140 ] 00:08:36.017 [2024-08-11 20:49:46.598554] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.017 [2024-08-11 20:49:46.657349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.017 [2024-08-11 20:49:46.709826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.660 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:36.660 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:08:36.660 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:36.944 Nvme0n1 00:08:36.944 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:37.203 [ 00:08:37.203 { 00:08:37.203 "name": "Nvme0n1", 00:08:37.203 "aliases": [ 00:08:37.203 "6fef8230-eade-4499-b2f7-ae15ea10ccf4" 00:08:37.203 ], 00:08:37.203 "product_name": "NVMe disk", 00:08:37.203 "block_size": 4096, 00:08:37.203 "num_blocks": 38912, 00:08:37.203 "uuid": "6fef8230-eade-4499-b2f7-ae15ea10ccf4", 00:08:37.203 "assigned_rate_limits": { 00:08:37.203 "rw_ios_per_sec": 0, 00:08:37.203 "rw_mbytes_per_sec": 0, 00:08:37.203 "r_mbytes_per_sec": 0, 00:08:37.203 "w_mbytes_per_sec": 0 00:08:37.203 }, 00:08:37.203 "claimed": false, 00:08:37.203 "zoned": false, 00:08:37.203 "supported_io_types": { 00:08:37.203 "read": true, 00:08:37.203 "write": true, 00:08:37.203 "unmap": true, 00:08:37.203 "flush": true, 00:08:37.203 "reset": true, 00:08:37.203 "nvme_admin": true, 00:08:37.203 "nvme_io": true, 00:08:37.203 "nvme_io_md": false, 00:08:37.203 "write_zeroes": true, 00:08:37.203 "zcopy": false, 00:08:37.203 "get_zone_info": false, 00:08:37.203 "zone_management": false, 00:08:37.203 "zone_append": false, 00:08:37.203 "compare": true, 00:08:37.203 "compare_and_write": true, 00:08:37.203 "abort": true, 00:08:37.203 "seek_hole": false, 00:08:37.203 "seek_data": false, 00:08:37.203 "copy": true, 00:08:37.203 "nvme_iov_md": false 00:08:37.203 }, 00:08:37.203 "memory_domains": [ 00:08:37.203 { 00:08:37.203 "dma_device_id": "system", 00:08:37.203 "dma_device_type": 1 00:08:37.203 } 00:08:37.203 ], 00:08:37.203 "driver_specific": { 00:08:37.203 "nvme": [ 00:08:37.203 { 00:08:37.203 "trid": { 00:08:37.203 "trtype": "TCP", 00:08:37.203 "adrfam": "IPv4", 00:08:37.203 "traddr": "10.0.0.3", 00:08:37.203 "trsvcid": "4420", 00:08:37.203 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:37.203 }, 00:08:37.203 "ctrlr_data": { 00:08:37.203 "cntlid": 1, 00:08:37.203 "vendor_id": "0x8086", 00:08:37.203 "model_number": "SPDK bdev Controller", 00:08:37.203 "serial_number": "SPDK0", 00:08:37.203 "firmware_revision": "24.09", 00:08:37.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:37.203 "oacs": { 00:08:37.203 "security": 0, 00:08:37.203 "format": 0, 00:08:37.203 "firmware": 0, 00:08:37.203 "ns_manage": 0 
00:08:37.203 }, 00:08:37.203 "multi_ctrlr": true, 00:08:37.203 "ana_reporting": false 00:08:37.203 }, 00:08:37.203 "vs": { 00:08:37.203 "nvme_version": "1.3" 00:08:37.203 }, 00:08:37.203 "ns_data": { 00:08:37.203 "id": 1, 00:08:37.203 "can_share": true 00:08:37.203 } 00:08:37.203 } 00:08:37.203 ], 00:08:37.203 "mp_policy": "active_passive" 00:08:37.203 } 00:08:37.203 } 00:08:37.203 ] 00:08:37.203 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74158 00:08:37.203 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:37.203 20:49:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:37.203 Running I/O for 10 seconds... 00:08:38.581 Latency(us) 00:08:38.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.581 Nvme0n1 : 1.00 8128.00 31.75 0.00 0.00 0.00 0.00 0.00 00:08:38.581 =================================================================================================================== 00:08:38.581 Total : 8128.00 31.75 0.00 0.00 0.00 0.00 0.00 00:08:38.581 00:08:39.147 20:49:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:39.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.406 Nvme0n1 : 2.00 8128.00 31.75 0.00 0.00 0.00 0.00 0.00 00:08:39.406 =================================================================================================================== 00:08:39.406 Total : 8128.00 31.75 0.00 0.00 0.00 0.00 0.00 00:08:39.406 00:08:39.406 true 00:08:39.406 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:39.406 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:39.974 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:39.974 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:39.974 20:49:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74158 00:08:40.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.232 Nvme0n1 : 3.00 8043.33 31.42 0.00 0.00 0.00 0.00 0.00 00:08:40.232 =================================================================================================================== 00:08:40.232 Total : 8043.33 31.42 0.00 0.00 0.00 0.00 0.00 00:08:40.232 00:08:41.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.609 Nvme0n1 : 4.00 7937.50 31.01 0.00 0.00 0.00 0.00 0.00 00:08:41.609 =================================================================================================================== 00:08:41.609 Total : 7937.50 31.01 0.00 0.00 0.00 0.00 0.00 00:08:41.609 00:08:42.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.543 Nvme0n1 : 5.00 
7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:08:42.543 =================================================================================================================== 00:08:42.543 Total : 7874.00 30.76 0.00 0.00 0.00 0.00 0.00 00:08:42.544 00:08:43.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.477 Nvme0n1 : 6.00 7852.83 30.68 0.00 0.00 0.00 0.00 0.00 00:08:43.477 =================================================================================================================== 00:08:43.477 Total : 7852.83 30.68 0.00 0.00 0.00 0.00 0.00 00:08:43.477 00:08:44.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.411 Nvme0n1 : 7.00 7801.43 30.47 0.00 0.00 0.00 0.00 0.00 00:08:44.411 =================================================================================================================== 00:08:44.411 Total : 7801.43 30.47 0.00 0.00 0.00 0.00 0.00 00:08:44.411 00:08:45.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.345 Nvme0n1 : 8.00 7633.38 29.82 0.00 0.00 0.00 0.00 0.00 00:08:45.345 =================================================================================================================== 00:08:45.345 Total : 7633.38 29.82 0.00 0.00 0.00 0.00 0.00 00:08:45.345 00:08:46.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.280 Nvme0n1 : 9.00 7603.67 29.70 0.00 0.00 0.00 0.00 0.00 00:08:46.280 =================================================================================================================== 00:08:46.280 Total : 7603.67 29.70 0.00 0.00 0.00 0.00 0.00 00:08:46.280 00:08:47.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.216 Nvme0n1 : 10.00 7554.50 29.51 0.00 0.00 0.00 0.00 0.00 00:08:47.216 =================================================================================================================== 00:08:47.216 Total : 7554.50 29.51 0.00 0.00 0.00 0.00 0.00 00:08:47.216 00:08:47.216 00:08:47.216 Latency(us) 00:08:47.216 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.216 Nvme0n1 : 10.00 7563.63 29.55 0.00 0.00 16917.56 9770.82 182070.92 00:08:47.216 =================================================================================================================== 00:08:47.216 Total : 7563.63 29.55 0.00 0.00 16917.56 9770.82 182070.92 00:08:47.216 0 00:08:47.475 20:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74140 00:08:47.475 20:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 74140 ']' 00:08:47.475 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 74140 00:08:47.475 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:08:47.475 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:47.475 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74140 00:08:47.475 killing process with pid 74140 00:08:47.475 Received shutdown signal, test time was about 10.000000 seconds 00:08:47.475 00:08:47.475 Latency(us) 00:08:47.475 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:08:47.475 =================================================================================================================== 00:08:47.475 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:47.475 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:47.475 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:47.475 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74140' 00:08:47.475 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 74140 00:08:47.475 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 74140 00:08:47.475 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:48.042 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:48.042 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:48.042 20:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73782 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73782 00:08:48.301 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73782 Killed "${NVMF_APP[@]}" "$@" 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@501 -- # nvmfpid=74296 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@502 -- # waitforlisten 74296 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 74296 ']' 00:08:48.301 20:49:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:48.301 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.560 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:48.560 [2024-08-11 20:49:59.111858] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:08:48.560 [2024-08-11 20:49:59.111940] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.560 [2024-08-11 20:49:59.247925] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.560 [2024-08-11 20:49:59.305723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.560 [2024-08-11 20:49:59.305781] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.560 [2024-08-11 20:49:59.305802] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.560 [2024-08-11 20:49:59.305809] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.560 [2024-08-11 20:49:59.305816] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
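At this point the lvs_grow_dirty test has killed the original nvmf_tgt with SIGKILL while the lvstore was still dirty and has started a fresh target (pid 74296); the records that follow show the aio bdev being re-created and the blobstore recovery notices. A minimal sketch of that recovery sequence, using only the RPCs visible in this trace (here rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the aio file path, bdev name and UUIDs are the ones from this particular run, not fixed values):

    rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # re-creating the aio bdev triggers lvstore examine; the bs_recover notices below come from this step
    rpc.py bdev_wait_for_examine                                                                   # wait until examine (and recovery) has finished
    rpc.py bdev_get_bdevs -b 6fef8230-eade-4499-b2f7-ae15ea10ccf4 -t 2000                          # confirm the recovered lvol bdev is back
    rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb | jq -r '.[0].free_clusters'   # verify free/total clusters survived the dirty shutdown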
00:08:48.560 [2024-08-11 20:49:59.305843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.819 [2024-08-11 20:49:59.356777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.819 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:48.819 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:08:48.819 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:08:48.819 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:48.819 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.819 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.819 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.078 [2024-08-11 20:49:59.663441] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:49.078 [2024-08-11 20:49:59.663921] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:49.078 [2024-08-11 20:49:59.664268] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:49.078 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:49.078 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6fef8230-eade-4499-b2f7-ae15ea10ccf4 00:08:49.078 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=6fef8230-eade-4499-b2f7-ae15ea10ccf4 00:08:49.078 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:49.078 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:08:49.078 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:49.078 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:49.078 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:49.337 20:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6fef8230-eade-4499-b2f7-ae15ea10ccf4 -t 2000 00:08:49.612 [ 00:08:49.612 { 00:08:49.612 "name": "6fef8230-eade-4499-b2f7-ae15ea10ccf4", 00:08:49.612 "aliases": [ 00:08:49.612 "lvs/lvol" 00:08:49.612 ], 00:08:49.612 "product_name": "Logical Volume", 00:08:49.612 "block_size": 4096, 00:08:49.612 "num_blocks": 38912, 00:08:49.612 "uuid": "6fef8230-eade-4499-b2f7-ae15ea10ccf4", 00:08:49.612 "assigned_rate_limits": { 00:08:49.612 "rw_ios_per_sec": 0, 00:08:49.612 "rw_mbytes_per_sec": 0, 00:08:49.612 "r_mbytes_per_sec": 0, 00:08:49.612 "w_mbytes_per_sec": 0 00:08:49.612 }, 00:08:49.612 
"claimed": false, 00:08:49.612 "zoned": false, 00:08:49.612 "supported_io_types": { 00:08:49.612 "read": true, 00:08:49.612 "write": true, 00:08:49.612 "unmap": true, 00:08:49.612 "flush": false, 00:08:49.612 "reset": true, 00:08:49.612 "nvme_admin": false, 00:08:49.612 "nvme_io": false, 00:08:49.612 "nvme_io_md": false, 00:08:49.612 "write_zeroes": true, 00:08:49.612 "zcopy": false, 00:08:49.612 "get_zone_info": false, 00:08:49.612 "zone_management": false, 00:08:49.612 "zone_append": false, 00:08:49.612 "compare": false, 00:08:49.612 "compare_and_write": false, 00:08:49.612 "abort": false, 00:08:49.612 "seek_hole": true, 00:08:49.612 "seek_data": true, 00:08:49.612 "copy": false, 00:08:49.612 "nvme_iov_md": false 00:08:49.612 }, 00:08:49.612 "driver_specific": { 00:08:49.612 "lvol": { 00:08:49.612 "lvol_store_uuid": "dd5007a3-c8ac-449c-80da-2fba8c1cdabb", 00:08:49.613 "base_bdev": "aio_bdev", 00:08:49.613 "thin_provision": false, 00:08:49.613 "num_allocated_clusters": 38, 00:08:49.613 "snapshot": false, 00:08:49.613 "clone": false, 00:08:49.613 "esnap_clone": false 00:08:49.613 } 00:08:49.613 } 00:08:49.613 } 00:08:49.613 ] 00:08:49.613 20:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:08:49.613 20:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:49.613 20:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:49.909 20:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:49.909 20:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:49.909 20:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:50.177 20:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:50.177 20:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:50.436 [2024-08-11 20:50:01.033154] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # local es=0 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:50.436 20:50:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:50.436 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:50.695 request: 00:08:50.695 { 00:08:50.695 "uuid": "dd5007a3-c8ac-449c-80da-2fba8c1cdabb", 00:08:50.695 "method": "bdev_lvol_get_lvstores", 00:08:50.695 "req_id": 1 00:08:50.695 } 00:08:50.695 Got JSON-RPC error response 00:08:50.695 response: 00:08:50.695 { 00:08:50.695 "code": -19, 00:08:50.695 "message": "No such device" 00:08:50.695 } 00:08:50.695 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # es=1 00:08:50.695 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:08:50.695 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:08:50.695 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:08:50.695 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:50.953 aio_bdev 00:08:50.953 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6fef8230-eade-4499-b2f7-ae15ea10ccf4 00:08:50.953 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=6fef8230-eade-4499-b2f7-ae15ea10ccf4 00:08:50.953 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:50.953 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:08:50.953 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:50.953 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:50.953 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:51.211 20:50:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6fef8230-eade-4499-b2f7-ae15ea10ccf4 -t 2000 00:08:51.470 [ 00:08:51.470 { 
00:08:51.470 "name": "6fef8230-eade-4499-b2f7-ae15ea10ccf4", 00:08:51.470 "aliases": [ 00:08:51.470 "lvs/lvol" 00:08:51.470 ], 00:08:51.470 "product_name": "Logical Volume", 00:08:51.470 "block_size": 4096, 00:08:51.470 "num_blocks": 38912, 00:08:51.470 "uuid": "6fef8230-eade-4499-b2f7-ae15ea10ccf4", 00:08:51.470 "assigned_rate_limits": { 00:08:51.470 "rw_ios_per_sec": 0, 00:08:51.470 "rw_mbytes_per_sec": 0, 00:08:51.470 "r_mbytes_per_sec": 0, 00:08:51.470 "w_mbytes_per_sec": 0 00:08:51.470 }, 00:08:51.470 "claimed": false, 00:08:51.470 "zoned": false, 00:08:51.470 "supported_io_types": { 00:08:51.470 "read": true, 00:08:51.470 "write": true, 00:08:51.470 "unmap": true, 00:08:51.470 "flush": false, 00:08:51.470 "reset": true, 00:08:51.470 "nvme_admin": false, 00:08:51.470 "nvme_io": false, 00:08:51.470 "nvme_io_md": false, 00:08:51.470 "write_zeroes": true, 00:08:51.470 "zcopy": false, 00:08:51.470 "get_zone_info": false, 00:08:51.470 "zone_management": false, 00:08:51.470 "zone_append": false, 00:08:51.470 "compare": false, 00:08:51.470 "compare_and_write": false, 00:08:51.470 "abort": false, 00:08:51.470 "seek_hole": true, 00:08:51.470 "seek_data": true, 00:08:51.470 "copy": false, 00:08:51.470 "nvme_iov_md": false 00:08:51.470 }, 00:08:51.470 "driver_specific": { 00:08:51.470 "lvol": { 00:08:51.470 "lvol_store_uuid": "dd5007a3-c8ac-449c-80da-2fba8c1cdabb", 00:08:51.470 "base_bdev": "aio_bdev", 00:08:51.470 "thin_provision": false, 00:08:51.470 "num_allocated_clusters": 38, 00:08:51.470 "snapshot": false, 00:08:51.470 "clone": false, 00:08:51.470 "esnap_clone": false 00:08:51.470 } 00:08:51.470 } 00:08:51.470 } 00:08:51.470 ] 00:08:51.470 20:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:08:51.470 20:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:51.470 20:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:51.728 20:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:51.728 20:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:51.728 20:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:51.987 20:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:51.987 20:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6fef8230-eade-4499-b2f7-ae15ea10ccf4 00:08:52.245 20:50:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd5007a3-c8ac-449c-80da-2fba8c1cdabb 00:08:52.504 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:52.761 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.020 ************************************ 00:08:53.020 END TEST lvs_grow_dirty 00:08:53.020 ************************************ 00:08:53.020 00:08:53.020 real 0m19.880s 00:08:53.020 user 0m41.244s 00:08:53.020 sys 0m8.881s 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:53.020 nvmf_trace.0 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # nvmfcleanup 00:08:53.020 20:50:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.587 rmmod nvme_tcp 00:08:53.587 rmmod nvme_fabrics 00:08:53.587 rmmod nvme_keyring 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # '[' -n 74296 ']' 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # killprocess 74296 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 74296 ']' 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 74296 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:08:53.587 20:50:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74296 00:08:53.587 killing process with pid 74296 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74296' 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 74296 00:08:53.587 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 74296 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # iptr 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@783 -- # iptables-save 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@783 -- # iptables-restore 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:08:53.846 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@242 -- # remove_spdk_ns 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # return 0 00:08:54.105 00:08:54.105 real 0m40.726s 00:08:54.105 user 1m4.198s 00:08:54.105 sys 0m12.428s 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:54.105 ************************************ 00:08:54.105 END TEST nvmf_lvs_grow 00:08:54.105 ************************************ 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.105 ************************************ 00:08:54.105 START TEST nvmf_bdev_io_wait 00:08:54.105 ************************************ 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:54.105 * Looking for test storage... 
00:08:54.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.105 20:50:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.105 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:54.106 20:50:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # prepare_net_devs 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # local -g is_hw=no 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # remove_spdk_ns 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.106 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # nvmf_veth_init 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:54.365 20:50:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:08:54.365 Cannot find device "nvmf_init_br" 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:08:54.365 Cannot find device "nvmf_init_br2" 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:08:54.365 Cannot find device "nvmf_tgt_br" 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # true 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:08:54.365 Cannot find device "nvmf_tgt_br2" 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # true 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:08:54.365 Cannot find device "nvmf_init_br" 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:08:54.365 Cannot find device "nvmf_init_br2" 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:08:54.365 Cannot find device "nvmf_tgt_br" 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:08:54.365 Cannot find device "nvmf_tgt_br2" 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:08:54.365 Cannot find device "nvmf_br" 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:54.365 20:50:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:08:54.365 Cannot find device "nvmf_init_if" 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:08:54.365 Cannot find device "nvmf_init_if2" 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:54.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if2 00:08:54.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:08:54.365 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:08:54.624 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.624 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:08:54.624 00:08:54.624 --- 10.0.0.3 ping statistics --- 00:08:54.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.624 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:08:54.624 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:54.624 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:08:54.624 00:08:54.624 --- 10.0.0.4 ping statistics --- 00:08:54.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.624 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:54.624 00:08:54.624 --- 10.0.0.1 ping statistics --- 00:08:54.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.624 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:54.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:08:54.624 00:08:54.624 --- 10.0.0.2 ping statistics --- 00:08:54.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.624 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:54.624 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@453 -- # return 0 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # nvmfpid=74647 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # waitforlisten 74647 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 74647 ']' 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:54.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:54.625 20:50:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.625 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:54.625 [2024-08-11 20:50:05.361017] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:08:54.625 [2024-08-11 20:50:05.361119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.883 [2024-08-11 20:50:05.500213] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.883 [2024-08-11 20:50:05.588920] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.883 [2024-08-11 20:50:05.589018] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.883 [2024-08-11 20:50:05.589044] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.883 [2024-08-11 20:50:05.589051] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.883 [2024-08-11 20:50:05.589058] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.883 [2024-08-11 20:50:05.589911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.883 [2024-08-11 20:50:05.590015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.883 [2024-08-11 20:50:05.590133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.883 [2024-08-11 20:50:05.590136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.819 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:55.819 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:08:55.819 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:08:55.819 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.819 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.819 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.819 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:55.819 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@557 -- # xtrace_disable 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@557 -- # xtrace_disable 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.820 [2024-08-11 20:50:06.504524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@557 -- # xtrace_disable 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.820 [2024-08-11 20:50:06.520723] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@557 -- # xtrace_disable 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.820 Malloc0 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@557 -- # xtrace_disable 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@557 -- # xtrace_disable 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@557 -- # xtrace_disable 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.820 [2024-08-11 20:50:06.586910] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74682 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74684 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@552 -- # config=() 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@552 -- # local subsystem config 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:08:55.820 20:50:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:08:55.820 { 00:08:55.820 "params": { 00:08:55.820 "name": "Nvme$subsystem", 00:08:55.820 "trtype": "$TEST_TRANSPORT", 00:08:55.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.820 "adrfam": "ipv4", 00:08:55.820 "trsvcid": "$NVMF_PORT", 00:08:55.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.820 "hdgst": ${hdgst:-false}, 00:08:55.820 "ddgst": ${ddgst:-false} 00:08:55.820 }, 00:08:55.820 "method": "bdev_nvme_attach_controller" 00:08:55.820 } 00:08:55.820 EOF 00:08:55.820 )") 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@552 -- # config=() 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@552 -- # local subsystem config 00:08:55.820 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:08:56.079 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:08:56.079 { 00:08:56.079 "params": { 00:08:56.079 "name": "Nvme$subsystem", 00:08:56.079 "trtype": "$TEST_TRANSPORT", 00:08:56.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:56.079 "adrfam": "ipv4", 00:08:56.079 "trsvcid": "$NVMF_PORT", 00:08:56.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:56.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:56.080 "hdgst": ${hdgst:-false}, 00:08:56.080 "ddgst": ${ddgst:-false} 00:08:56.080 }, 00:08:56.080 "method": "bdev_nvme_attach_controller" 00:08:56.080 } 00:08:56.080 EOF 00:08:56.080 )") 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@574 -- # cat 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74687 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@574 -- # cat 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74692 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@552 -- # config=() 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@552 -- # local subsystem config 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:08:56.080 { 00:08:56.080 "params": { 00:08:56.080 "name": "Nvme$subsystem", 00:08:56.080 "trtype": "$TEST_TRANSPORT", 00:08:56.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:56.080 "adrfam": "ipv4", 00:08:56.080 "trsvcid": 
"$NVMF_PORT", 00:08:56.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:56.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:56.080 "hdgst": ${hdgst:-false}, 00:08:56.080 "ddgst": ${ddgst:-false} 00:08:56.080 }, 00:08:56.080 "method": "bdev_nvme_attach_controller" 00:08:56.080 } 00:08:56.080 EOF 00:08:56.080 )") 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@576 -- # jq . 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@577 -- # IFS=, 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@574 -- # cat 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:08:56.080 "params": { 00:08:56.080 "name": "Nvme1", 00:08:56.080 "trtype": "tcp", 00:08:56.080 "traddr": "10.0.0.3", 00:08:56.080 "adrfam": "ipv4", 00:08:56.080 "trsvcid": "4420", 00:08:56.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:56.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:56.080 "hdgst": false, 00:08:56.080 "ddgst": false 00:08:56.080 }, 00:08:56.080 "method": "bdev_nvme_attach_controller" 00:08:56.080 }' 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@552 -- # config=() 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@552 -- # local subsystem config 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:08:56.080 { 00:08:56.080 "params": { 00:08:56.080 "name": "Nvme$subsystem", 00:08:56.080 "trtype": "$TEST_TRANSPORT", 00:08:56.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:56.080 "adrfam": "ipv4", 00:08:56.080 "trsvcid": "$NVMF_PORT", 00:08:56.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:56.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:56.080 "hdgst": ${hdgst:-false}, 00:08:56.080 "ddgst": ${ddgst:-false} 00:08:56.080 }, 00:08:56.080 "method": "bdev_nvme_attach_controller" 00:08:56.080 } 00:08:56.080 EOF 00:08:56.080 )") 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@574 -- # cat 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@576 -- # jq . 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@576 -- # jq . 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@576 -- # jq . 
00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@577 -- # IFS=, 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:08:56.080 "params": { 00:08:56.080 "name": "Nvme1", 00:08:56.080 "trtype": "tcp", 00:08:56.080 "traddr": "10.0.0.3", 00:08:56.080 "adrfam": "ipv4", 00:08:56.080 "trsvcid": "4420", 00:08:56.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:56.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:56.080 "hdgst": false, 00:08:56.080 "ddgst": false 00:08:56.080 }, 00:08:56.080 "method": "bdev_nvme_attach_controller" 00:08:56.080 }' 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@577 -- # IFS=, 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:08:56.080 "params": { 00:08:56.080 "name": "Nvme1", 00:08:56.080 "trtype": "tcp", 00:08:56.080 "traddr": "10.0.0.3", 00:08:56.080 "adrfam": "ipv4", 00:08:56.080 "trsvcid": "4420", 00:08:56.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:56.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:56.080 "hdgst": false, 00:08:56.080 "ddgst": false 00:08:56.080 }, 00:08:56.080 "method": "bdev_nvme_attach_controller" 00:08:56.080 }' 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@577 -- # IFS=, 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:08:56.080 "params": { 00:08:56.080 "name": "Nvme1", 00:08:56.080 "trtype": "tcp", 00:08:56.080 "traddr": "10.0.0.3", 00:08:56.080 "adrfam": "ipv4", 00:08:56.080 "trsvcid": "4420", 00:08:56.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:56.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:56.080 "hdgst": false, 00:08:56.080 "ddgst": false 00:08:56.080 }, 00:08:56.080 "method": "bdev_nvme_attach_controller" 00:08:56.080 }' 00:08:56.080 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:56.080 [2024-08-11 20:50:06.649730] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:08:56.080 [2024-08-11 20:50:06.649827] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:56.080 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:56.080 [2024-08-11 20:50:06.664426] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:08:56.080 [2024-08-11 20:50:06.664529] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:56.080 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:56.080 [2024-08-11 20:50:06.667243] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:08:56.080 [2024-08-11 20:50:06.667321] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:56.080 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:56.080 [2024-08-11 20:50:06.670942] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:08:56.080 [2024-08-11 20:50:06.671010] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:56.080 20:50:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74682 00:08:56.339 [2024-08-11 20:50:06.858528] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.339 [2024-08-11 20:50:06.935267] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.339 [2024-08-11 20:50:06.958291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:56.339 [2024-08-11 20:50:07.008204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:56.339 [2024-08-11 20:50:07.021789] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.339 [2024-08-11 20:50:07.029760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.339 [2024-08-11 20:50:07.065124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.339 [2024-08-11 20:50:07.092445] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.339 [2024-08-11 20:50:07.094307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:56.598 [2024-08-11 20:50:07.144216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.598 Running I/O for 1 seconds... 00:08:56.598 Running I/O for 1 seconds... 00:08:56.598 [2024-08-11 20:50:07.162131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:56.598 [2024-08-11 20:50:07.208876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.598 Running I/O for 1 seconds... 00:08:56.598 Running I/O for 1 seconds... 
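Each of the four bdevperf instances (write, read, flush and unmap, pinned to core masks 0x10, 0x20, 0x40 and 0x80) receives its NVMe-oF connection parameters as a JSON config on /dev/fd/63, generated by gen_nvmf_target_json and printed in the trace above. A minimal stand-alone equivalent for the write job could look like the sketch below; the "subsystems"/"bdev" wrapper is the standard SPDK JSON config layout assumed here, and the helper in nvmf/common.sh may add further entries beyond the single bdev_nvme_attach_controller shown:

cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Queue depth 128, 4 KiB I/O, 1 second write workload on core 0x10 (mirrors the WRITE_PID job above)
./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json /tmp/nvmf_bdev.json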
00:08:57.535 00:08:57.535 Latency(us) 00:08:57.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.535 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:57.535 Nvme1n1 : 1.00 182630.00 713.40 0.00 0.00 698.27 357.47 1169.22 00:08:57.535 =================================================================================================================== 00:08:57.535 Total : 182630.00 713.40 0.00 0.00 698.27 357.47 1169.22 00:08:57.535 00:08:57.535 Latency(us) 00:08:57.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.535 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:57.535 Nvme1n1 : 1.02 6029.49 23.55 0.00 0.00 20916.04 9353.77 33602.09 00:08:57.535 =================================================================================================================== 00:08:57.535 Total : 6029.49 23.55 0.00 0.00 20916.04 9353.77 33602.09 00:08:57.535 00:08:57.535 Latency(us) 00:08:57.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.535 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:57.535 Nvme1n1 : 1.01 6065.49 23.69 0.00 0.00 21037.89 5451.40 41466.41 00:08:57.535 =================================================================================================================== 00:08:57.535 Total : 6065.49 23.69 0.00 0.00 21037.89 5451.40 41466.41 00:08:57.793 00:08:57.793 Latency(us) 00:08:57.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.793 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:57.793 Nvme1n1 : 1.01 8999.75 35.16 0.00 0.00 14156.33 7983.48 26691.03 00:08:57.793 =================================================================================================================== 00:08:57.793 Total : 8999.75 35.16 0.00 0.00 14156.33 7983.48 26691.03 00:08:57.793 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74684 00:08:57.793 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74687 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74692 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@557 -- # xtrace_disable 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # nvmfcleanup 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.052 rmmod nvme_tcp 00:08:58.052 rmmod nvme_fabrics 00:08:58.052 rmmod nvme_keyring 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # '[' -n 74647 ']' 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # killprocess 74647 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 74647 ']' 00:08:58.052 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 74647 00:08:58.053 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:08:58.053 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:58.053 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74647 00:08:58.053 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:58.053 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:58.053 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74647' 00:08:58.053 killing process with pid 74647 00:08:58.053 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 74647 00:08:58.053 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 74647 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # iptr 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@783 -- # iptables-save 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@783 -- # iptables-restore 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:08:58.311 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 
nomaster 00:08:58.312 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:08:58.312 20:50:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:08:58.312 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:08:58.312 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:08:58.312 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:08:58.312 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # remove_spdk_ns 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # return 0 00:08:58.571 00:08:58.571 real 0m4.414s 00:08:58.571 user 0m18.333s 00:08:58.571 sys 0m2.331s 00:08:58.571 ************************************ 00:08:58.571 END TEST nvmf_bdev_io_wait 00:08:58.571 ************************************ 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.571 ************************************ 00:08:58.571 START TEST nvmf_queue_depth 00:08:58.571 ************************************ 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:58.571 * Looking for test storage... 
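queue_depth.sh starts by sourcing nvmf/common.sh and calling nvmftestinit, which re-creates the veth/bridge topology whose ip and iptables commands are traced below: the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4 while the initiator stays in the root namespace on 10.0.0.1/10.0.0.2, with both veth pairs joined over the nvmf_br bridge. Stripped down to the first interface pair, the setup amounts to roughly this sketch (the real helper also configures nvmf_init_if2/nvmf_tgt_if2 and the matching 4420 rule for the second initiator interface):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the two peer ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                          # sanity check before any NVMe/TCP traffic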
00:08:58.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.571 20:50:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.571 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:58.572 20:50:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # prepare_net_devs 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # local -g is_hw=no 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # remove_spdk_ns 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.572 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@452 -- # nvmf_veth_init 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:08:58.831 Cannot find device "nvmf_init_br" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:08:58.831 Cannot find device "nvmf_init_br2" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:08:58.831 Cannot find device "nvmf_tgt_br" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.831 Cannot find device "nvmf_tgt_br2" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:08:58.831 Cannot find device "nvmf_init_br" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:08:58.831 Cannot find device "nvmf_init_br2" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:08:58.831 Cannot find device "nvmf_tgt_br" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:08:58.831 Cannot find device "nvmf_tgt_br2" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:08:58.831 Cannot find device "nvmf_br" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:08:58.831 Cannot find device "nvmf_init_if" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:08:58.831 Cannot find device "nvmf_init_if2" 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:58.831 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:59.090 20:50:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:08:59.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:59.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:08:59.090 00:08:59.090 --- 10.0.0.3 ping statistics --- 00:08:59.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.090 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:59.090 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:08:59.090 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:59.090 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:08:59.090 00:08:59.090 --- 10.0.0.4 ping statistics --- 00:08:59.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.090 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:59.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:08:59.091 00:08:59.091 --- 10.0.0.1 ping statistics --- 00:08:59.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.091 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:59.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:59.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:59.091 00:08:59.091 --- 10.0.0.2 ping statistics --- 00:08:59.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.091 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@453 -- # return 0 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@501 -- # nvmfpid=74967 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # waitforlisten 74967 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 74967 ']' 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:59.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:59.091 20:50:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.091 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:08:59.091 [2024-08-11 20:50:09.780780] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:08:59.091 [2024-08-11 20:50:09.780887] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.350 [2024-08-11 20:50:09.925485] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.350 [2024-08-11 20:50:10.038264] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.350 [2024-08-11 20:50:10.038332] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.350 [2024-08-11 20:50:10.038346] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.350 [2024-08-11 20:50:10.038356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.350 [2024-08-11 20:50:10.038365] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.350 [2024-08-11 20:50:10.038402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.350 [2024-08-11 20:50:10.117326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.917 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:59.917 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:08:59.917 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:08:59.917 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.917 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.176 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.176 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:00.176 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:00.176 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.176 [2024-08-11 20:50:10.736530] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.176 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:00.176 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:00.176 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:00.176 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.176 Malloc0 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.177 [2024-08-11 20:50:10.797715] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75005 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75005 /var/tmp/bdevperf.sock 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 75005 ']' 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:00.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:00.177 20:50:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.177 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:00.177 [2024-08-11 20:50:10.858213] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:09:00.177 [2024-08-11 20:50:10.858324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75005 ] 00:09:00.436 [2024-08-11 20:50:10.999281] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.436 [2024-08-11 20:50:11.068509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.436 [2024-08-11 20:50:11.127453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.436 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:00.436 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:09:00.436 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:00.436 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:00.436 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.694 NVMe0n1 00:09:00.694 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:00.694 20:50:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:00.694 Running I/O for 10 seconds... 00:09:12.899 00:09:12.899 Latency(us) 00:09:12.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.899 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:12.899 Verification LBA range: start 0x0 length 0x4000 00:09:12.899 NVMe0n1 : 10.06 9316.17 36.39 0.00 0.00 109433.48 16086.11 155379.90 00:09:12.899 =================================================================================================================== 00:09:12.899 Total : 9316.17 36.39 0.00 0.00 109433.48 16086.11 155379.90 00:09:12.899 0 00:09:12.899 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75005 00:09:12.899 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 75005 ']' 00:09:12.899 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 75005 00:09:12.899 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:09:12.899 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:12.899 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75005 00:09:12.899 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:12.899 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:12.899 killing process with pid 75005 00:09:12.899 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75005' 00:09:12.899 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@965 -- 
# kill 75005 00:09:12.899 Received shutdown signal, test time was about 10.000000 seconds 00:09:12.899 00:09:12.900 Latency(us) 00:09:12.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.900 =================================================================================================================== 00:09:12.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 75005 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # nvmfcleanup 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:12.900 rmmod nvme_tcp 00:09:12.900 rmmod nvme_fabrics 00:09:12.900 rmmod nvme_keyring 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # '[' -n 74967 ']' 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # killprocess 74967 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 74967 ']' 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 74967 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74967 00:09:12.900 killing process with pid 74967 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74967' 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 74967 00:09:12.900 20:50:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 74967 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:09:12.900 20:50:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # iptr 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@783 -- # iptables-save 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@783 -- # iptables-restore 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # remove_spdk_ns 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # return 0 00:09:12.900 00:09:12.900 real 0m13.100s 00:09:12.900 user 0m21.796s 00:09:12.900 sys 0m2.376s 00:09:12.900 ************************************ 00:09:12.900 END TEST nvmf_queue_depth 00:09:12.900 ************************************ 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core -- 
nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.900 ************************************ 00:09:12.900 START TEST nvmf_target_multipath 00:09:12.900 ************************************ 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:12.900 * Looking for test storage... 00:09:12.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.900 20:50:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # prepare_net_devs 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # local -g is_hw=no 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # remove_spdk_ns 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@452 -- # nvmf_veth_init 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 
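The nvmf_veth_init step that follows turns these variables into the actual test topology: the initiator-side veth ends stay on the host, the target-side veth ends are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are joined by the nvmf_br bridge. As a hedged sketch, condensed to a single initiator/target leg and using the same names and addresses the trace below sets up and then verifies with ping:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target end goes into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                         # host (initiator) address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address inside the namespace
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge the two host-side peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # open TCP/4420 for NVMe/TCP on this interface
  ping -c 1 10.0.0.3                                               # the reachability check the trace runs next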
00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:09:12.900 Cannot find device "nvmf_init_br" 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:09:12.900 Cannot find device "nvmf_init_br2" 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:09:12.900 Cannot find device "nvmf_tgt_br" 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # true 00:09:12.900 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:09:12.900 Cannot find device "nvmf_tgt_br2" 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # true 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:09:12.901 Cannot find device "nvmf_init_br" 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:09:12.901 Cannot find device "nvmf_init_br2" 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:09:12.901 Cannot find device "nvmf_tgt_br" 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 
00:09:12.901 Cannot find device "nvmf_tgt_br2" 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:09:12.901 Cannot find device "nvmf_br" 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:09:12.901 Cannot find device "nvmf_init_if" 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:09:12.901 Cannot find device "nvmf_init_if2" 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:12.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:12.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set 
nvmf_init_if2 up 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:09:12.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:12.901 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:09:12.901 00:09:12.901 --- 10.0.0.3 ping statistics --- 00:09:12.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.901 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:09:12.901 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:12.901 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:09:12.901 00:09:12.901 --- 10.0.0.4 ping statistics --- 00:09:12.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.901 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:12.901 00:09:12.901 --- 10.0.0.1 ping statistics --- 00:09:12.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.901 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:12.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:12.901 00:09:12.901 --- 10.0.0.2 ping statistics --- 00:09:12.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.901 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@453 -- # return 0 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@501 -- # nvmfpid=75356 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # waitforlisten 75356 00:09:12.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 75356 ']' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:12.901 20:50:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.901 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:12.901 [2024-08-11 20:50:23.003534] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:09:12.901 [2024-08-11 20:50:23.003788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.901 [2024-08-11 20:50:23.145951] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.901 [2024-08-11 20:50:23.215897] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.901 [2024-08-11 20:50:23.216236] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.901 [2024-08-11 20:50:23.216415] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.901 [2024-08-11 20:50:23.216432] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.901 [2024-08-11 20:50:23.216441] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
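Once the target comes up on /var/tmp/spdk.sock inside the namespace (the reactor messages just below), multipath.sh provisions it over RPC: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, a subsystem with ANA reporting, and a listener on each of the two target addresses. The trace that follows shows the exact calls; condensed into a hedged sketch they amount to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                              # TCP transport with the harness's options
  $rpc bdev_malloc_create 64 512 -b Malloc0                                 # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r: ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # first path
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420   # second path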
00:09:12.901 [2024-08-11 20:50:23.216623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.901 [2024-08-11 20:50:23.216770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.901 [2024-08-11 20:50:23.218042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.901 [2024-08-11 20:50:23.218049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.901 [2024-08-11 20:50:23.274879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.901 20:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:12.901 20:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:09:12.901 20:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:09:12.901 20:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.901 20:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.901 20:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.901 20:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:12.901 [2024-08-11 20:50:23.663809] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.160 20:50:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:13.418 Malloc0 00:09:13.419 20:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:13.677 20:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.936 20:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:14.195 [2024-08-11 20:50:24.826273] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:14.195 20:50:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:14.453 [2024-08-11 20:50:25.102505] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:14.453 20:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:14.712 20:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:09:14.712 20:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:14.712 20:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:09:14.712 20:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.712 20:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:14.712 20:50:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:09:17.246 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:17.246 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:17.246 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.246 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:17.246 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.246 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:09:17.246 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:17.246 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
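On the host side, once both connects have succeeded, the script locates the nvme-subsys* instance whose NQN and serial match and then polls each path's ana_state file. The same state can be inspected by hand through the sysfs paths the trace uses (the bare "echo numa" just below, and "echo round-robin" later in the run, are xtrace lines whose redirection is not printed; they presumably write the subsystem's iopolicy attribute). A small sketch:

  nvme list-subsys                                                   # both TCP paths should appear under one subsystem
  ls -d /sys/class/nvme-subsystem/nvme-subsys0/nvme*/nvme*c*         # per-path nodes, nvme0c0n1 and nvme0c1n1 in this run
  cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state  # the files check_ana_state polls
  cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy                # native multipath I/O policy (numa, then round-robin)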
00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75444 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:17.247 20:50:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:17.247 [global] 00:09:17.247 thread=1 00:09:17.247 invalidate=1 00:09:17.247 rw=randrw 00:09:17.247 time_based=1 00:09:17.247 runtime=6 00:09:17.247 ioengine=libaio 00:09:17.247 direct=1 00:09:17.247 bs=4096 00:09:17.247 iodepth=128 00:09:17.247 norandommap=0 00:09:17.247 numjobs=1 00:09:17.247 00:09:17.247 verify_dump=1 00:09:17.247 verify_backlog=512 00:09:17.247 verify_state_save=0 00:09:17.247 do_verify=1 00:09:17.247 verify=crc32c-intel 00:09:17.247 [job0] 00:09:17.247 filename=/dev/nvme0n1 00:09:17.247 Could not set queue depth (nvme0n1) 00:09:17.247 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.247 fio-3.35 00:09:17.247 Starting 1 thread 00:09:17.815 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:18.074 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
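The fio-wrapper call above materializes the job file just printed and runs it against the multipath device while the listeners' ANA states are flipped underneath it, so I/O is expected to keep completing as the active path changes. An approximate standalone equivalent, as a sketch that maps the printed job-file keys onto fio's command-line options (the wrapper itself may pass further options of its own):

  fio --name=job0 --filename=/dev/nvme0n1 --thread=1 \
      --ioengine=libaio --direct=1 --rw=randrw --bs=4096 --iodepth=128 --numjobs=1 \
      --time_based --runtime=6 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1 --verify_state_save=0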
00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:18.333 20:50:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:18.592 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:18.851 20:50:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75444 00:09:23.063 00:09:23.063 job0: (groupid=0, jobs=1): err= 0: pid=75469: Sun Aug 11 20:50:33 2024 00:09:23.063 read: IOPS=10.9k, BW=42.7MiB/s (44.7MB/s)(256MiB/6002msec) 00:09:23.063 slat (usec): min=2, max=6133, avg=53.83, stdev=216.03 00:09:23.063 clat (usec): min=794, max=14978, avg=7951.66, stdev=1329.71 00:09:23.063 lat (usec): min=855, max=15004, avg=8005.49, stdev=1334.05 00:09:23.063 clat percentiles (usec): 00:09:23.063 | 1.00th=[ 4228], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7177], 00:09:23.063 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8029], 00:09:23.063 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[10814], 00:09:23.063 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13304], 99.95th=[13829], 00:09:23.063 | 99.99th=[14484] 00:09:23.063 bw ( KiB/s): min=12144, max=29736, per=52.36%, avg=22873.55, stdev=6487.64, samples=11 00:09:23.063 iops : min= 3036, max= 7434, avg=5718.36, stdev=1621.91, samples=11 00:09:23.063 write: IOPS=6574, BW=25.7MiB/s (26.9MB/s)(135MiB/5260msec); 0 zone resets 00:09:23.063 slat (usec): min=3, max=1833, avg=61.98, stdev=147.98 00:09:23.063 clat (usec): min=1289, max=14234, avg=6944.72, stdev=1182.43 00:09:23.063 lat (usec): min=1330, max=14256, avg=7006.71, stdev=1186.72 00:09:23.063 clat percentiles (usec): 00:09:23.063 | 1.00th=[ 3228], 5.00th=[ 4228], 10.00th=[ 5604], 20.00th=[ 6456], 00:09:23.063 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7242], 00:09:23.063 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7963], 95.00th=[ 8225], 00:09:23.063 | 99.00th=[10421], 99.50th=[11076], 99.90th=[12518], 99.95th=[12780], 00:09:23.063 | 99.99th=[13304] 00:09:23.063 bw ( KiB/s): min=12336, max=29128, per=87.07%, avg=22897.27, stdev=6065.04, samples=11 00:09:23.063 iops : min= 3084, max= 7282, avg=5724.27, stdev=1516.25, samples=11 00:09:23.063 lat (usec) : 1000=0.01% 00:09:23.063 lat (msec) : 2=0.02%, 4=1.80%, 10=93.23%, 20=4.95% 00:09:23.063 cpu : usr=6.22%, sys=21.58%, ctx=5800, majf=0, minf=127 00:09:23.063 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:23.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:23.063 issued rwts: total=65549,34581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.063 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:23.063 00:09:23.063 Run status group 0 (all jobs): 00:09:23.063 READ: bw=42.7MiB/s (44.7MB/s), 42.7MiB/s-42.7MiB/s (44.7MB/s-44.7MB/s), io=256MiB (268MB), run=6002-6002msec 00:09:23.063 WRITE: bw=25.7MiB/s (26.9MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=135MiB (142MB), run=5260-5260msec 00:09:23.063 00:09:23.063 Disk stats (read/write): 00:09:23.063 nvme0n1: ios=64607/33976, merge=0/0, ticks=491013/220491, in_queue=711504, util=98.65% 00:09:23.063 20:50:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:23.322 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:23.580 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:23.580 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75547 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:23.581 20:50:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:23.581 [global] 00:09:23.581 thread=1 00:09:23.581 invalidate=1 00:09:23.581 rw=randrw 00:09:23.581 time_based=1 00:09:23.581 runtime=6 00:09:23.581 ioengine=libaio 00:09:23.581 direct=1 00:09:23.581 bs=4096 00:09:23.581 iodepth=128 00:09:23.581 norandommap=0 00:09:23.581 numjobs=1 00:09:23.581 00:09:23.581 verify_dump=1 00:09:23.581 verify_backlog=512 00:09:23.581 verify_state_save=0 00:09:23.581 do_verify=1 00:09:23.581 verify=crc32c-intel 00:09:23.581 [job0] 00:09:23.581 filename=/dev/nvme0n1 00:09:23.581 Could not set queue depth (nvme0n1) 00:09:23.839 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:23.839 fio-3.35 00:09:23.839 Starting 1 thread 00:09:24.775 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:25.034 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:25.293 20:50:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:25.551 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:25.809 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:25.810 20:50:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75547 00:09:29.997 00:09:29.997 job0: (groupid=0, jobs=1): err= 0: pid=75574: Sun Aug 11 20:50:40 2024 00:09:29.997 read: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(263MiB/6007msec) 00:09:29.997 slat (usec): min=2, max=6343, avg=43.57, stdev=204.78 00:09:29.997 clat (usec): min=328, max=26573, avg=7781.83, stdev=2597.26 00:09:29.997 lat (usec): min=346, max=26587, avg=7825.39, stdev=2611.01 00:09:29.997 clat percentiles (usec): 00:09:29.997 | 1.00th=[ 1614], 5.00th=[ 3818], 10.00th=[ 4621], 20.00th=[ 5735], 00:09:29.997 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8291], 00:09:29.997 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[10290], 95.00th=[12125], 00:09:29.997 | 99.00th=[15795], 99.50th=[17957], 99.90th=[24511], 99.95th=[25297], 00:09:29.997 | 99.99th=[26346] 00:09:29.997 bw ( KiB/s): min=11704, max=40024, per=53.17%, avg=23824.00, stdev=8328.41, samples=12 00:09:29.997 iops : min= 2926, max=10006, avg=5956.00, stdev=2082.10, samples=12 00:09:29.997 write: IOPS=6640, BW=25.9MiB/s (27.2MB/s)(140MiB/5389msec); 0 zone resets 00:09:29.997 slat (usec): min=11, max=5587, avg=55.47, stdev=142.93 00:09:29.997 clat (usec): min=286, max=22558, avg=6685.02, stdev=2207.81 00:09:29.997 lat (usec): min=307, max=22574, avg=6740.49, stdev=2220.66 00:09:29.997 clat percentiles (usec): 00:09:29.997 | 1.00th=[ 2147], 5.00th=[ 3163], 10.00th=[ 3720], 20.00th=[ 4490], 00:09:29.997 | 30.00th=[ 5538], 40.00th=[ 6587], 50.00th=[ 7046], 60.00th=[ 7439], 00:09:29.997 | 70.00th=[ 7767], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[ 9503], 00:09:29.997 | 99.00th=[13304], 99.50th=[15008], 99.90th=[16712], 99.95th=[17695], 00:09:29.997 | 99.99th=[21365] 00:09:29.997 bw ( KiB/s): min=12288, max=40608, per=89.67%, avg=23818.00, stdev=8159.50, samples=12 00:09:29.997 iops : min= 3072, max=10152, avg=5954.50, stdev=2039.87, samples=12 00:09:29.997 lat (usec) : 500=0.05%, 750=0.12%, 1000=0.18% 00:09:29.997 lat (msec) : 2=0.85%, 4=7.26%, 10=82.53%, 20=8.78%, 50=0.22% 00:09:29.997 cpu : usr=5.98%, sys=22.83%, ctx=6875, majf=0, minf=102 00:09:29.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:29.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.997 issued rwts: total=67285,35784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.997 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:29.997 00:09:29.997 Run status group 0 (all jobs): 00:09:29.997 READ: bw=43.8MiB/s (45.9MB/s), 43.8MiB/s-43.8MiB/s (45.9MB/s-45.9MB/s), io=263MiB (276MB), run=6007-6007msec 00:09:29.997 WRITE: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=140MiB (147MB), run=5389-5389msec 00:09:29.997 00:09:29.997 Disk stats (read/write): 00:09:29.997 nvme0n1: ios=66507/35270, merge=0/0, ticks=483227/213907, in_queue=697134, util=98.62% 00:09:29.997 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:29.997 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.997 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1215 -- # local i=0 00:09:29.997 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.997 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:29.997 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:29.997 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.997 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:09:29.997 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.256 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:30.256 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:30.256 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:30.256 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:30.256 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # nvmfcleanup 00:09:30.256 20:50:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.515 rmmod nvme_tcp 00:09:30.515 rmmod nvme_fabrics 00:09:30.515 rmmod nvme_keyring 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # 
'[' -n 75356 ']' 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # killprocess 75356 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 75356 ']' 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 75356 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75356 00:09:30.515 killing process with pid 75356 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75356' 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 75356 00:09:30.515 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 75356 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # iptr 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@783 -- # iptables-save 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@783 -- # iptables-restore 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:09:30.774 
20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:09:30.774 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # remove_spdk_ns 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # return 0 00:09:31.034 00:09:31.034 real 0m19.239s 00:09:31.034 user 1m10.827s 00:09:31.034 sys 0m9.953s 00:09:31.034 ************************************ 00:09:31.034 END TEST nvmf_target_multipath 00:09:31.034 ************************************ 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:31.034 ************************************ 00:09:31.034 START TEST nvmf_zcopy 00:09:31.034 ************************************ 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:31.034 * Looking for test storage... 
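The check_ana_state calls traced in the multipath run above boil down to polling the kernel's per-path ANA state file until it reports the expected value. A minimal sketch of that pattern, assuming the same sysfs layout (the sleep/retry details are illustrative; the suite's own helper is the one traced at target/multipath.sh@18-25):

# Poll /sys/block/<path>/ana_state until it matches the expected state or a ~20s budget runs out.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f || "$(<"$ana_state_f")" != "$ana_state" ]]; do
        sleep 1
        (( timeout-- > 0 )) || return 1
    done
}

# Used after flipping a listener's ANA group, e.g.:
#   rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
check_ana_state nvme0c0n1 inaccessible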
00:09:31.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # prepare_net_devs 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # local -g is_hw=no 
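The nvmf/common.sh block sourced above is where the test ports (4420-4422), the generated host NQN/ID, and the NVME_CONNECT/NVME_HOST helpers come from. Roughly how those pieces combine on the initiator side, using the values printed above (standard nvme-cli flags; a sketch, not a command copied verbatim from this run):

# Attach the initiator to the first listener of cnode1 over TCP.
nvme connect -t tcp -a 10.0.0.3 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 \
    --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9

# Matching teardown, as seen at the end of the multipath test above:
nvme disconnect -n nqn.2016-06.io.spdk:cnode1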
00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # remove_spdk_ns 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.034 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@452 -- # nvmf_veth_init 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:09:31.293 Cannot find device "nvmf_init_br" 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:09:31.293 Cannot find device "nvmf_init_br2" 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 
00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:09:31.293 Cannot find device "nvmf_tgt_br" 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # true 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.293 Cannot find device "nvmf_tgt_br2" 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # true 00:09:31.293 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:09:31.293 Cannot find device "nvmf_init_br" 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:09:31.294 Cannot find device "nvmf_init_br2" 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:09:31.294 Cannot find device "nvmf_tgt_br" 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:09:31.294 Cannot find device "nvmf_tgt_br2" 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:09:31.294 Cannot find device "nvmf_br" 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:09:31.294 Cannot find device "nvmf_init_if" 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:09:31.294 Cannot find device "nvmf_init_if2" 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.294 20:50:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.294 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.294 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.294 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.294 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:31.553 
20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:09:31.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:09:31.553 00:09:31.553 --- 10.0.0.3 ping statistics --- 00:09:31.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.553 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:09:31.553 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:31.553 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:09:31.553 00:09:31.553 --- 10.0.0.4 ping statistics --- 00:09:31.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.553 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:31.553 00:09:31.553 --- 10.0.0.1 ping statistics --- 00:09:31.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.553 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:31.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:09:31.553 00:09:31.553 --- 10.0.0.2 ping statistics --- 00:09:31.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.553 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@453 -- # return 0 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@501 -- # nvmfpid=75861 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # waitforlisten 75861 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 75861 ']' 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.553 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:31.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.554 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.554 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:31.554 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.554 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:31.554 [2024-08-11 20:50:42.311809] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
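The nvmf_veth_init trace above, ending in the four pings, builds the whole throwaway test network: initiator-side veths on the host, target-side veths inside the nvmf_tgt_ns_spdk namespace, everything joined on the nvmf_br bridge, and TCP/4420 opened in iptables. Condensed to one interface pair (commands taken from the trace; the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, follows the same pattern):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # same sanity check as above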
00:09:31.554 [2024-08-11 20:50:42.311903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.813 [2024-08-11 20:50:42.446379] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.813 [2024-08-11 20:50:42.504953] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.813 [2024-08-11 20:50:42.505027] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.813 [2024-08-11 20:50:42.505038] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.813 [2024-08-11 20:50:42.505046] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.813 [2024-08-11 20:50:42.505053] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.813 [2024-08-11 20:50:42.505090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.813 [2024-08-11 20:50:42.561154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.072 [2024-08-11 20:50:42.669525] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.072 [2024-08-11 20:50:42.689763] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:32.072 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.073 malloc0 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@552 -- # config=() 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@552 -- # local subsystem config 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:09:32.073 { 00:09:32.073 "params": { 00:09:32.073 "name": "Nvme$subsystem", 00:09:32.073 "trtype": "$TEST_TRANSPORT", 00:09:32.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.073 "adrfam": "ipv4", 00:09:32.073 "trsvcid": "$NVMF_PORT", 00:09:32.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.073 "hdgst": ${hdgst:-false}, 00:09:32.073 "ddgst": ${ddgst:-false} 00:09:32.073 }, 00:09:32.073 "method": "bdev_nvme_attach_controller" 00:09:32.073 } 00:09:32.073 EOF 00:09:32.073 )") 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@574 -- # cat 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@576 -- # jq . 
00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@577 -- # IFS=, 00:09:32.073 20:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:09:32.073 "params": { 00:09:32.073 "name": "Nvme1", 00:09:32.073 "trtype": "tcp", 00:09:32.073 "traddr": "10.0.0.3", 00:09:32.073 "adrfam": "ipv4", 00:09:32.073 "trsvcid": "4420", 00:09:32.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.073 "hdgst": false, 00:09:32.073 "ddgst": false 00:09:32.073 }, 00:09:32.073 "method": "bdev_nvme_attach_controller" 00:09:32.073 }' 00:09:32.073 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:32.073 [2024-08-11 20:50:42.780977] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:09:32.073 [2024-08-11 20:50:42.781076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75892 ] 00:09:32.332 [2024-08-11 20:50:42.918724] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.332 [2024-08-11 20:50:42.983800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.332 [2024-08-11 20:50:43.048045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.591 Running I/O for 10 seconds... 00:09:42.568 00:09:42.568 Latency(us) 00:09:42.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.568 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:42.568 Verification LBA range: start 0x0 length 0x1000 00:09:42.568 Nvme1n1 : 10.01 6719.77 52.50 0.00 0.00 18988.28 1377.75 34793.66 00:09:42.568 =================================================================================================================== 00:09:42.568 Total : 6719.77 52.50 0.00 0.00 18988.28 1377.75 34793.66 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76009 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@552 -- # config=() 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@552 -- # local subsystem config 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:09:42.827 { 00:09:42.827 "params": { 00:09:42.827 "name": "Nvme$subsystem", 00:09:42.827 "trtype": "$TEST_TRANSPORT", 00:09:42.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.827 "adrfam": "ipv4", 00:09:42.827 "trsvcid": "$NVMF_PORT", 00:09:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.827 "hdgst": ${hdgst:-false}, 00:09:42.827 "ddgst": ${ddgst:-false} 00:09:42.827 
}, 00:09:42.827 "method": "bdev_nvme_attach_controller" 00:09:42.827 } 00:09:42.827 EOF 00:09:42.827 )") 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@574 -- # cat 00:09:42.827 [2024-08-11 20:50:53.457537] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.827 [2024-08-11 20:50:53.457594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@576 -- # jq . 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@577 -- # IFS=, 00:09:42.827 20:50:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:09:42.827 "params": { 00:09:42.827 "name": "Nvme1", 00:09:42.827 "trtype": "tcp", 00:09:42.827 "traddr": "10.0.0.3", 00:09:42.827 "adrfam": "ipv4", 00:09:42.827 "trsvcid": "4420", 00:09:42.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.827 "hdgst": false, 00:09:42.827 "ddgst": false 00:09:42.827 }, 00:09:42.827 "method": "bdev_nvme_attach_controller" 00:09:42.827 }' 00:09:42.827 [2024-08-11 20:50:53.469500] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.827 [2024-08-11 20:50:53.469542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.827 [2024-08-11 20:50:53.481501] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.827 [2024-08-11 20:50:53.481542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.827 [2024-08-11 20:50:53.493503] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.827 [2024-08-11 20:50:53.493544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.827 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:42.827 [2024-08-11 20:50:53.505510] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.828 [2024-08-11 20:50:53.505534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.828 [2024-08-11 20:50:53.506157] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
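Both bdevperf runs above talk to a target that was assembled entirely over RPC at the top of the zcopy test (target/zcopy.sh@22-30). The same sequence written out as plain rpc.py invocations for reference; rpc_cmd in the trace is the suite's wrapper around the same RPC interface, and the paths and arguments below are the ones already shown in this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                 # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                        # 32 MB malloc bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then attaches to that subsystem with the bdev_nvme_attach_controller parameters printed above (traddr 10.0.0.3, trsvcid 4420), supplied as JSON on /dev/fd/62 for the 10-second verify run and /dev/fd/63 for the 5-second randrw run.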
00:09:42.828 [2024-08-11 20:50:53.506250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76009 ] 00:09:42.828 [2024-08-11 20:50:53.517505] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.828 [2024-08-11 20:50:53.517543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.828 [2024-08-11 20:50:53.529515] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.828 [2024-08-11 20:50:53.529557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.828 [2024-08-11 20:50:53.541511] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.828 [2024-08-11 20:50:53.541551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.828 [2024-08-11 20:50:53.553514] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.828 [2024-08-11 20:50:53.553554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.828 [2024-08-11 20:50:53.565516] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.828 [2024-08-11 20:50:53.565556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.828 [2024-08-11 20:50:53.573520] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.828 [2024-08-11 20:50:53.573558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.828 [2024-08-11 20:50:53.585521] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.828 [2024-08-11 20:50:53.585559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.828 [2024-08-11 20:50:53.597523] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.828 [2024-08-11 20:50:53.597563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.609534] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.609573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.621528] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.621567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.633529] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.633568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.640824] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.087 [2024-08-11 20:50:53.645529] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.645567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.657533] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.657572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.669539] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.669577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.681545] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.681586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.693546] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.693587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.705549] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.705588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.717552] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.717591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.722226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.087 [2024-08-11 20:50:53.729555] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.729594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.741557] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.741596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.753562] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.753601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.765565] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.765603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.777568] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.777639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.789570] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.789644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.801572] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.801637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.805904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.087 [2024-08-11 20:50:53.813590] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.813680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.825582] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:43.087 [2024-08-11 20:50:53.825665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.837586] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.837650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.849591] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.849646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.087 [2024-08-11 20:50:53.861665] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.087 [2024-08-11 20:50:53.861756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.346 [2024-08-11 20:50:53.873672] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.346 [2024-08-11 20:50:53.873739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.346 [2024-08-11 20:50:53.885740] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.346 [2024-08-11 20:50:53.885784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.346 [2024-08-11 20:50:53.897750] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.346 [2024-08-11 20:50:53.897795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.346 [2024-08-11 20:50:53.909764] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.346 [2024-08-11 20:50:53.909810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.346 [2024-08-11 20:50:53.921771] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.346 [2024-08-11 20:50:53.921818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.346 Running I/O for 5 seconds... 
... (same NSID-conflict pair repeating, timestamps advancing from 20:50:53.933 to 20:50:58.334) ...
00:09:47.824 [2024-08-11 20:50:58.350756] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:47.824 [2024-08-11 20:50:58.350792]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.367803] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.367839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.385095] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.385287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.401472] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.401507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.417782] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.417818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.435088] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.435124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.451741] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.451776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.468161] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.468197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.486015] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.486089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.502058] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.502094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.519466] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.519703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.535695] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.535731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.552753] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.552788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.569580] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.569650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.824 [2024-08-11 20:50:58.586715] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.824 [2024-08-11 20:50:58.586751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.603735] 
subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.603781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.620685] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.620720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.637379] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.637415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.653661] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.653696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.671078] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.671113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.687862] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.687898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.705028] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.705063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.721967] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.722179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.740078] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.740112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.755321] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.755358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.771236] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.771274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.787385] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.787423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.804807] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.804843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.820337] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.820547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.836213] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.083 [2024-08-11 20:50:58.836250] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.083 [2024-08-11 20:50:58.853568] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.084 [2024-08-11 20:50:58.853632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:58.868980] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:58.869033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:58.886821] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:58.887035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:58.902442] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:58.902643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:58.920619] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:58.920666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:58.936886] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:58.936924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 00:09:48.343 Latency(us) 00:09:48.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.343 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:48.343 Nvme1n1 : 5.01 13273.74 103.70 0.00 0.00 9630.76 3872.58 17873.45 00:09:48.343 =================================================================================================================== 00:09:48.343 Total : 13273.74 103.70 0.00 0.00 9630.76 3872.58 17873.45 00:09:48.343 [2024-08-11 20:50:58.948343] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:58.948378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:58.960334] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:58.960367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:58.972349] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:58.972649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:58.984357] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:58.984393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:58.996355] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:58.996392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:59.008361] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:59.008649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:59.020370] 
subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:59.020662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:59.032373] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:59.032410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:59.044367] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:59.044407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:59.056381] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:59.056710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:59.068399] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:59.068632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:59.080375] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:59.080580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:59.092394] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:59.092661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:59.104398] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:59.104651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.343 [2024-08-11 20:50:59.116384] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.343 [2024-08-11 20:50:59.116581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 [2024-08-11 20:50:59.128402] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-08-11 20:50:59.128655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 [2024-08-11 20:50:59.140396] subsystem.c:2072:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.602 [2024-08-11 20:50:59.140575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.602 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76009) - No such process 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76009 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@557 -- # xtrace_disable 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.602 delay0 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.602 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:48.603 20:50:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:48.603 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:48.603 [2024-08-11 20:50:59.347691] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:55.168 Initializing NVMe Controllers 00:09:55.168 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:55.168 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:55.168 Initialization complete. Launching workers. 00:09:55.168 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 91 00:09:55.168 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 378, failed to submit 33 00:09:55.168 success 250, unsuccessful 128, failed 0 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # nvmfcleanup 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:55.168 rmmod nvme_tcp 00:09:55.168 rmmod nvme_fabrics 00:09:55.168 rmmod nvme_keyring 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # '[' -n 75861 ']' 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # killprocess 75861 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 75861 ']' 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 75861 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@951 -- # uname 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75861 00:09:55.168 killing process with pid 75861 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75861' 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 75861 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 75861 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # iptr 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@783 -- # iptables-save 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@783 -- # iptables-restore 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:09:55.168 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:09:55.427 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:09:55.427 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:55.427 20:51:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # 
remove_spdk_ns 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # return 0 00:09:55.427 00:09:55.427 real 0m24.366s 00:09:55.427 user 0m39.378s 00:09:55.427 sys 0m7.182s 00:09:55.427 ************************************ 00:09:55.427 END TEST nvmf_zcopy 00:09:55.427 ************************************ 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.427 ************************************ 00:09:55.427 START TEST nvmf_nmic 00:09:55.427 ************************************ 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:55.427 * Looking for test storage... 00:09:55.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.427 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.687 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # prepare_net_devs 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # local -g is_hw=no 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # remove_spdk_ns 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@452 -- # nvmf_veth_init 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:09:55.688 Cannot find device "nvmf_init_br" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:09:55.688 Cannot find device "nvmf_init_br2" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:09:55.688 Cannot find device "nvmf_tgt_br" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:09:55.688 Cannot find device "nvmf_tgt_br2" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:09:55.688 Cannot find device "nvmf_init_br" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:09:55.688 Cannot find device "nvmf_init_br2" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:09:55.688 Cannot find device "nvmf_tgt_br" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:55.688 20:51:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:09:55.688 Cannot find device "nvmf_tgt_br2" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:09:55.688 Cannot find device "nvmf_br" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:09:55.688 Cannot find device "nvmf_init_if" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:09:55.688 Cannot find device "nvmf_init_if2" 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:55.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:55.688 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:55.688 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:55.947 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:09:55.948 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:55.948 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:09:55.948 00:09:55.948 --- 10.0.0.3 ping statistics --- 00:09:55.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.948 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:09:55.948 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:09:55.948 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:09:55.948 00:09:55.948 --- 10.0.0.4 ping statistics --- 00:09:55.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.948 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:55.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:55.948 00:09:55.948 --- 10.0.0.1 ping statistics --- 00:09:55.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.948 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:55.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:09:55.948 00:09:55.948 --- 10.0.0.2 ping statistics --- 00:09:55.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.948 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@453 -- # return 0 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@501 -- # nvmfpid=76380 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # waitforlisten 76380 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 76380 ']' 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:55.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:55.948 20:51:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:55.948 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:09:55.948 [2024-08-11 20:51:06.702997] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:09:55.948 [2024-08-11 20:51:06.703086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.207 [2024-08-11 20:51:06.842103] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.207 [2024-08-11 20:51:06.942240] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.207 [2024-08-11 20:51:06.942636] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.207 [2024-08-11 20:51:06.942960] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.207 [2024-08-11 20:51:06.943115] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.207 [2024-08-11 20:51:06.943229] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.207 [2024-08-11 20:51:06.943489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.207 [2024-08-11 20:51:06.943654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.207 [2024-08-11 20:51:06.943726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.207 [2024-08-11 20:51:06.943725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.466 [2024-08-11 20:51:07.002240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:57.044 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:57.044 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:09:57.044 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:09:57.044 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.044 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.044 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.044 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.044 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:57.044 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.329 [2024-08-11 20:51:07.819021] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.329 20:51:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.329 Malloc0 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.329 [2024-08-11 20:51:07.884857] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:57.329 test case1: single bdev can't be used in multiple subsystems 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:57.329 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.330 [2024-08-11 20:51:07.908712] bdev.c:8155:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:57.330 [2024-08-11 20:51:07.908934] subsystem.c:2101:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:57.330 [2024-08-11 20:51:07.909164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.330 request: 00:09:57.330 { 00:09:57.330 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:57.330 "namespace": { 00:09:57.330 "bdev_name": "Malloc0", 00:09:57.330 "no_auto_visible": false 00:09:57.330 }, 00:09:57.330 "method": "nvmf_subsystem_add_ns", 00:09:57.330 "req_id": 1 00:09:57.330 } 00:09:57.330 Got JSON-RPC error response 00:09:57.330 response: 00:09:57.330 { 00:09:57.330 "code": -32602, 00:09:57.330 "message": "Invalid parameters" 00:09:57.330 } 00:09:57.330 Adding namespace failed - expected result. 00:09:57.330 test case2: host connect to nvmf target in multiple paths 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@557 -- # xtrace_disable 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.330 [2024-08-11 20:51:07.920840] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:09:57.330 20:51:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:57.330 20:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:57.589 20:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:57.589 20:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:09:57.589 20:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.589 20:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:57.589 20:51:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:09:59.492 20:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 
00:09:59.492 20:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:59.492 20:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.492 20:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:59.492 20:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.492 20:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:09:59.492 20:51:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:59.492 [global] 00:09:59.492 thread=1 00:09:59.492 invalidate=1 00:09:59.492 rw=write 00:09:59.492 time_based=1 00:09:59.492 runtime=1 00:09:59.492 ioengine=libaio 00:09:59.492 direct=1 00:09:59.492 bs=4096 00:09:59.492 iodepth=1 00:09:59.492 norandommap=0 00:09:59.492 numjobs=1 00:09:59.492 00:09:59.492 verify_dump=1 00:09:59.492 verify_backlog=512 00:09:59.492 verify_state_save=0 00:09:59.492 do_verify=1 00:09:59.492 verify=crc32c-intel 00:09:59.492 [job0] 00:09:59.492 filename=/dev/nvme0n1 00:09:59.492 Could not set queue depth (nvme0n1) 00:09:59.751 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.751 fio-3.35 00:09:59.751 Starting 1 thread 00:10:01.128 00:10:01.128 job0: (groupid=0, jobs=1): err= 0: pid=76472: Sun Aug 11 20:51:11 2024 00:10:01.128 read: IOPS=2596, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1001msec) 00:10:01.128 slat (nsec): min=11757, max=54165, avg=14575.49, stdev=4866.91 00:10:01.128 clat (usec): min=135, max=312, avg=197.41, stdev=27.23 00:10:01.128 lat (usec): min=153, max=327, avg=211.99, stdev=27.98 00:10:01.128 clat percentiles (usec): 00:10:01.128 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:10:01.128 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:10:01.128 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 247], 00:10:01.128 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 310], 00:10:01.128 | 99.99th=[ 314] 00:10:01.128 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:01.128 slat (usec): min=13, max=142, avg=22.32, stdev= 7.71 00:10:01.128 clat (usec): min=82, max=266, avg=121.08, stdev=21.61 00:10:01.128 lat (usec): min=99, max=409, avg=143.40, stdev=23.64 00:10:01.128 clat percentiles (usec): 00:10:01.128 | 1.00th=[ 88], 5.00th=[ 93], 10.00th=[ 97], 20.00th=[ 103], 00:10:01.128 | 30.00th=[ 109], 40.00th=[ 113], 50.00th=[ 118], 60.00th=[ 123], 00:10:01.128 | 70.00th=[ 130], 80.00th=[ 139], 90.00th=[ 151], 95.00th=[ 161], 00:10:01.128 | 99.00th=[ 184], 99.50th=[ 196], 99.90th=[ 217], 99.95th=[ 223], 00:10:01.128 | 99.99th=[ 269] 00:10:01.128 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:10:01.128 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:01.128 lat (usec) : 100=8.85%, 250=89.33%, 500=1.82% 00:10:01.128 cpu : usr=2.00%, sys=8.00%, ctx=5671, majf=0, minf=5 00:10:01.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.128 issued rwts: total=2599,3072,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:01.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.128 00:10:01.128 Run status group 0 (all jobs): 00:10:01.128 READ: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=10.2MiB (10.6MB), run=1001-1001msec 00:10:01.128 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:01.128 00:10:01.128 Disk stats (read/write): 00:10:01.128 nvme0n1: ios=2508/2560, merge=0/0, ticks=510/350, in_queue=860, util=91.38% 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # nvmfcleanup 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:01.128 rmmod nvme_tcp 00:10:01.128 rmmod nvme_fabrics 00:10:01.128 rmmod nvme_keyring 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # '[' -n 76380 ']' 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # killprocess 76380 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 76380 ']' 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 76380 00:10:01.128 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:10:01.129 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:01.129 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76380 
00:10:01.129 killing process with pid 76380 00:10:01.129 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:01.129 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:01.129 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76380' 00:10:01.129 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 76380 00:10:01.129 20:51:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 76380 00:10:01.387 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:10:01.387 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:10:01.387 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:10:01.387 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # iptr 00:10:01.388 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:10:01.388 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@783 -- # iptables-save 00:10:01.388 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@783 -- # iptables-restore 00:10:01.388 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.388 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:10:01.388 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:10:01.388 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:10:01.388 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:10:01.388 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:10:01.646 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:10:01.646 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:10:01.646 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:10:01.646 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:10:01.646 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:10:01.646 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:10:01.646 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:10:01.646 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:01.646 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # remove_spdk_ns 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.647 20:51:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # return 0 00:10:01.647 ************************************ 00:10:01.647 END TEST nvmf_nmic 00:10:01.647 ************************************ 00:10:01.647 00:10:01.647 real 0m6.240s 00:10:01.647 user 0m19.762s 00:10:01.647 sys 0m2.067s 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:01.647 ************************************ 00:10:01.647 START TEST nvmf_fio_target 00:10:01.647 ************************************ 00:10:01.647 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:01.906 * Looking for test storage... 00:10:01.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.906 
20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.906 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # prepare_net_devs 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # local -g is_hw=no 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # remove_spdk_ns 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@452 -- # nvmf_veth_init 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:01.907 20:51:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:10:01.907 Cannot find device "nvmf_init_br" 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:10:01.907 Cannot find device "nvmf_init_br2" 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:10:01.907 Cannot find device "nvmf_tgt_br" 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:10:01.907 Cannot find device "nvmf_tgt_br2" 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:10:01.907 Cannot find device "nvmf_init_br" 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:10:01.907 Cannot find device "nvmf_init_br2" 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:10:01.907 Cannot find device "nvmf_tgt_br" 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:10:01.907 Cannot find device "nvmf_tgt_br2" 00:10:01.907 20:51:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:10:01.907 Cannot find device "nvmf_br" 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:10:01.907 Cannot find device "nvmf_init_if" 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:10:01.907 Cannot find device "nvmf_init_if2" 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:01.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:01.907 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:01.907 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:10:02.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:02.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:02.166 00:10:02.166 --- 10.0.0.3 ping statistics --- 00:10:02.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.166 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:02.166 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:10:02.166 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:10:02.166 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 00:10:02.166 00:10:02.166 --- 10.0.0.4 ping statistics --- 00:10:02.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.166 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:10:02.167 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:02.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:02.167 00:10:02.167 --- 10.0.0.1 ping statistics --- 00:10:02.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.167 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:02.167 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:02.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:10:02.426 00:10:02.426 --- 10.0.0.2 ping statistics --- 00:10:02.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.426 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@453 -- # return 0 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@501 -- # nvmfpid=76696 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # waitforlisten 76696 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 76696 ']' 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:02.426 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:02.426 20:51:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.426 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:10:02.426 [2024-08-11 20:51:13.033747] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:10:02.426 [2024-08-11 20:51:13.033836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.426 [2024-08-11 20:51:13.173338] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.684 [2024-08-11 20:51:13.241461] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.684 [2024-08-11 20:51:13.241854] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.684 [2024-08-11 20:51:13.242097] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.684 [2024-08-11 20:51:13.242292] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.684 [2024-08-11 20:51:13.242346] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.684 [2024-08-11 20:51:13.242758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.684 [2024-08-11 20:51:13.242842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.684 [2024-08-11 20:51:13.242934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.684 [2024-08-11 20:51:13.242940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.684 [2024-08-11 20:51:13.303339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.684 20:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:02.684 20:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:10:02.684 20:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:10:02.684 20:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:02.684 20:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.684 20:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.684 20:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:02.942 [2024-08-11 20:51:13.714538] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.201 20:51:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.460 20:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
malloc_bdevs='Malloc0 ' 00:10:03.460 20:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.718 20:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:03.718 20:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.977 20:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:03.977 20:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.236 20:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:04.236 20:51:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:04.494 20:51:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.753 20:51:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:04.753 20:51:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.012 20:51:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:05.012 20:51:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.271 20:51:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:05.271 20:51:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:05.530 20:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:05.788 20:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:05.788 20:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.047 20:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:06.047 20:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:06.317 20:51:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:06.587 [2024-08-11 20:51:17.146462] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:06.587 20:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:06.845 20:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:07.104 20:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:07.104 20:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:07.104 20:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:10:07.104 20:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.104 20:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:10:07.104 20:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:10:07.104 20:51:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:10:09.636 20:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:09.636 20:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:09.636 20:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.636 20:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:10:09.636 20:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.636 20:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:10:09.636 20:51:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:09.636 [global] 00:10:09.636 thread=1 00:10:09.636 invalidate=1 00:10:09.636 rw=write 00:10:09.636 time_based=1 00:10:09.636 runtime=1 00:10:09.636 ioengine=libaio 00:10:09.636 direct=1 00:10:09.636 bs=4096 00:10:09.636 iodepth=1 00:10:09.636 norandommap=0 00:10:09.636 numjobs=1 00:10:09.636 00:10:09.636 verify_dump=1 00:10:09.636 verify_backlog=512 00:10:09.636 verify_state_save=0 00:10:09.636 do_verify=1 00:10:09.636 verify=crc32c-intel 00:10:09.636 [job0] 00:10:09.636 filename=/dev/nvme0n1 00:10:09.636 [job1] 00:10:09.636 filename=/dev/nvme0n2 00:10:09.636 [job2] 00:10:09.636 filename=/dev/nvme0n3 00:10:09.636 [job3] 00:10:09.636 filename=/dev/nvme0n4 00:10:09.636 Could not set queue depth (nvme0n1) 00:10:09.636 Could not set queue depth (nvme0n2) 00:10:09.636 Could not set queue depth (nvme0n3) 00:10:09.636 Could not set queue depth (nvme0n4) 00:10:09.636 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.636 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.636 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.636 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.636 fio-3.35 00:10:09.636 Starting 4 threads 00:10:10.571 00:10:10.572 job0: (groupid=0, jobs=1): err= 0: pid=76873: Sun Aug 11 20:51:21 2024 00:10:10.572 read: 
IOPS=858, BW=3433KiB/s (3515kB/s)(3436KiB/1001msec) 00:10:10.572 slat (nsec): min=16572, max=94859, avg=47556.29, stdev=13977.98 00:10:10.572 clat (usec): min=229, max=1245, avg=630.82, stdev=183.42 00:10:10.572 lat (usec): min=248, max=1311, avg=678.38, stdev=183.84 00:10:10.572 clat percentiles (usec): 00:10:10.572 | 1.00th=[ 269], 5.00th=[ 429], 10.00th=[ 465], 20.00th=[ 498], 00:10:10.572 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 611], 00:10:10.572 | 70.00th=[ 660], 80.00th=[ 758], 90.00th=[ 930], 95.00th=[ 1037], 00:10:10.572 | 99.00th=[ 1172], 99.50th=[ 1221], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:10.572 | 99.99th=[ 1254] 00:10:10.572 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:10.572 slat (nsec): min=27738, max=96410, avg=44006.04, stdev=10720.20 00:10:10.572 clat (usec): min=151, max=624, avg=354.53, stdev=126.06 00:10:10.572 lat (usec): min=184, max=669, avg=398.53, stdev=128.65 00:10:10.572 clat percentiles (usec): 00:10:10.572 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 215], 00:10:10.572 | 30.00th=[ 258], 40.00th=[ 302], 50.00th=[ 375], 60.00th=[ 408], 00:10:10.572 | 70.00th=[ 437], 80.00th=[ 474], 90.00th=[ 523], 95.00th=[ 562], 00:10:10.572 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[ 619], 99.95th=[ 627], 00:10:10.572 | 99.99th=[ 627] 00:10:10.572 bw ( KiB/s): min= 4096, max= 4096, per=17.20%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.572 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.572 lat (usec) : 250=15.83%, 500=39.88%, 750=34.68%, 1000=6.48% 00:10:10.572 lat (msec) : 2=3.13% 00:10:10.572 cpu : usr=2.00%, sys=6.70%, ctx=1883, majf=0, minf=11 00:10:10.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.572 issued rwts: total=859,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.572 job1: (groupid=0, jobs=1): err= 0: pid=76874: Sun Aug 11 20:51:21 2024 00:10:10.572 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:10.572 slat (nsec): min=13067, max=72840, avg=22935.22, stdev=9365.10 00:10:10.572 clat (usec): min=138, max=7220, avg=475.81, stdev=427.64 00:10:10.572 lat (usec): min=154, max=7235, avg=498.74, stdev=430.65 00:10:10.572 clat percentiles (usec): 00:10:10.572 | 1.00th=[ 153], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 208], 00:10:10.572 | 30.00th=[ 229], 40.00th=[ 265], 50.00th=[ 529], 60.00th=[ 570], 00:10:10.572 | 70.00th=[ 603], 80.00th=[ 652], 90.00th=[ 717], 95.00th=[ 775], 00:10:10.572 | 99.00th=[ 938], 99.50th=[ 3261], 99.90th=[ 5866], 99.95th=[ 7242], 00:10:10.572 | 99.99th=[ 7242] 00:10:10.572 write: IOPS=1349, BW=5399KiB/s (5528kB/s)(5404KiB/1001msec); 0 zone resets 00:10:10.572 slat (usec): min=16, max=143, avg=37.74, stdev=19.45 00:10:10.572 clat (usec): min=94, max=727, avg=318.92, stdev=157.60 00:10:10.572 lat (usec): min=115, max=772, avg=356.66, stdev=170.82 00:10:10.572 clat percentiles (usec): 00:10:10.572 | 1.00th=[ 117], 5.00th=[ 133], 10.00th=[ 147], 20.00th=[ 167], 00:10:10.572 | 30.00th=[ 192], 40.00th=[ 233], 50.00th=[ 269], 60.00th=[ 330], 00:10:10.572 | 70.00th=[ 416], 80.00th=[ 506], 90.00th=[ 553], 95.00th=[ 586], 00:10:10.572 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 701], 99.95th=[ 725], 00:10:10.572 | 99.99th=[ 725] 00:10:10.572 bw ( KiB/s): min= 
8192, max= 8192, per=34.40%, avg=8192.00, stdev= 0.00, samples=1 00:10:10.572 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:10.572 lat (usec) : 100=0.08%, 250=41.43%, 500=23.45%, 750=32.21%, 1000=2.40% 00:10:10.572 lat (msec) : 2=0.08%, 4=0.17%, 10=0.17% 00:10:10.572 cpu : usr=1.90%, sys=5.70%, ctx=2380, majf=0, minf=1 00:10:10.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.572 issued rwts: total=1024,1351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.572 job2: (groupid=0, jobs=1): err= 0: pid=76875: Sun Aug 11 20:51:21 2024 00:10:10.572 read: IOPS=2067, BW=8272KiB/s (8470kB/s)(8280KiB/1001msec) 00:10:10.572 slat (nsec): min=10753, max=43344, avg=14214.67, stdev=4139.04 00:10:10.572 clat (usec): min=149, max=356, avg=225.96, stdev=36.41 00:10:10.572 lat (usec): min=160, max=380, avg=240.18, stdev=36.62 00:10:10.572 clat percentiles (usec): 00:10:10.572 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 196], 00:10:10.572 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 231], 00:10:10.572 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 297], 00:10:10.572 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 355], 99.95th=[ 355], 00:10:10.572 | 99.99th=[ 355] 00:10:10.572 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:10.572 slat (nsec): min=13404, max=82394, avg=21690.25, stdev=6732.54 00:10:10.572 clat (usec): min=94, max=1955, avg=171.72, stdev=54.73 00:10:10.572 lat (usec): min=111, max=1984, avg=193.41, stdev=55.12 00:10:10.572 clat percentiles (usec): 00:10:10.572 | 1.00th=[ 105], 5.00th=[ 117], 10.00th=[ 127], 20.00th=[ 141], 00:10:10.572 | 30.00th=[ 149], 40.00th=[ 157], 50.00th=[ 167], 60.00th=[ 178], 00:10:10.572 | 70.00th=[ 188], 80.00th=[ 202], 90.00th=[ 219], 95.00th=[ 239], 00:10:10.572 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 363], 99.95th=[ 1254], 00:10:10.572 | 99.99th=[ 1958] 00:10:10.572 bw ( KiB/s): min=10192, max=10192, per=42.80%, avg=10192.00, stdev= 0.00, samples=1 00:10:10.572 iops : min= 2548, max= 2548, avg=2548.00, stdev= 0.00, samples=1 00:10:10.572 lat (usec) : 100=0.15%, 250=88.73%, 500=11.08% 00:10:10.572 lat (msec) : 2=0.04% 00:10:10.572 cpu : usr=1.40%, sys=6.90%, ctx=4630, majf=0, minf=7 00:10:10.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.572 issued rwts: total=2070,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.572 job3: (groupid=0, jobs=1): err= 0: pid=76876: Sun Aug 11 20:51:21 2024 00:10:10.572 read: IOPS=771, BW=3085KiB/s (3159kB/s)(3088KiB/1001msec) 00:10:10.572 slat (nsec): min=11635, max=78418, avg=25124.26, stdev=7887.06 00:10:10.572 clat (usec): min=379, max=1728, avg=595.11, stdev=89.68 00:10:10.572 lat (usec): min=394, max=1758, avg=620.24, stdev=89.78 00:10:10.572 clat percentiles (usec): 00:10:10.572 | 1.00th=[ 416], 5.00th=[ 465], 10.00th=[ 502], 20.00th=[ 537], 00:10:10.572 | 30.00th=[ 553], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 603], 00:10:10.572 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 693], 
95.00th=[ 734], 00:10:10.572 | 99.00th=[ 832], 99.50th=[ 873], 99.90th=[ 1729], 99.95th=[ 1729], 00:10:10.572 | 99.99th=[ 1729] 00:10:10.572 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:10.572 slat (usec): min=21, max=378, avg=52.54, stdev=21.43 00:10:10.572 clat (usec): min=169, max=3721, avg=449.31, stdev=167.58 00:10:10.572 lat (usec): min=204, max=3766, avg=501.85, stdev=175.68 00:10:10.572 clat percentiles (usec): 00:10:10.572 | 1.00th=[ 198], 5.00th=[ 235], 10.00th=[ 269], 20.00th=[ 310], 00:10:10.572 | 30.00th=[ 371], 40.00th=[ 416], 50.00th=[ 449], 60.00th=[ 490], 00:10:10.572 | 70.00th=[ 523], 80.00th=[ 562], 90.00th=[ 611], 95.00th=[ 668], 00:10:10.572 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 857], 99.95th=[ 3720], 00:10:10.572 | 99.99th=[ 3720] 00:10:10.572 bw ( KiB/s): min= 4096, max= 4096, per=17.20%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.572 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.572 lat (usec) : 250=4.18%, 500=35.80%, 750=57.46%, 1000=2.45% 00:10:10.572 lat (msec) : 2=0.06%, 4=0.06% 00:10:10.572 cpu : usr=1.80%, sys=5.70%, ctx=1799, majf=0, minf=17 00:10:10.572 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.572 issued rwts: total=772,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.572 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.572 00:10:10.572 Run status group 0 (all jobs): 00:10:10.572 READ: bw=18.4MiB/s (19.3MB/s), 3085KiB/s-8272KiB/s (3159kB/s-8470kB/s), io=18.5MiB (19.4MB), run=1001-1001msec 00:10:10.572 WRITE: bw=23.3MiB/s (24.4MB/s), 4092KiB/s-9.99MiB/s (4190kB/s-10.5MB/s), io=23.3MiB (24.4MB), run=1001-1001msec 00:10:10.572 00:10:10.572 Disk stats (read/write): 00:10:10.572 nvme0n1: ios=694/1024, merge=0/0, ticks=431/370, in_queue=801, util=86.95% 00:10:10.572 nvme0n2: ios=1024/1057, merge=0/0, ticks=476/301, in_queue=777, util=85.99% 00:10:10.572 nvme0n3: ios=1867/2048, merge=0/0, ticks=441/380, in_queue=821, util=89.03% 00:10:10.572 nvme0n4: ios=537/1024, merge=0/0, ticks=313/470, in_queue=783, util=89.59% 00:10:10.572 20:51:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:10.572 [global] 00:10:10.572 thread=1 00:10:10.572 invalidate=1 00:10:10.572 rw=randwrite 00:10:10.572 time_based=1 00:10:10.572 runtime=1 00:10:10.572 ioengine=libaio 00:10:10.572 direct=1 00:10:10.572 bs=4096 00:10:10.572 iodepth=1 00:10:10.572 norandommap=0 00:10:10.572 numjobs=1 00:10:10.572 00:10:10.572 verify_dump=1 00:10:10.572 verify_backlog=512 00:10:10.572 verify_state_save=0 00:10:10.572 do_verify=1 00:10:10.572 verify=crc32c-intel 00:10:10.572 [job0] 00:10:10.572 filename=/dev/nvme0n1 00:10:10.572 [job1] 00:10:10.572 filename=/dev/nvme0n2 00:10:10.572 [job2] 00:10:10.572 filename=/dev/nvme0n3 00:10:10.572 [job3] 00:10:10.572 filename=/dev/nvme0n4 00:10:10.830 Could not set queue depth (nvme0n1) 00:10:10.830 Could not set queue depth (nvme0n2) 00:10:10.830 Could not set queue depth (nvme0n3) 00:10:10.830 Could not set queue depth (nvme0n4) 00:10:10.830 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.830 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:10.830 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.830 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.830 fio-3.35 00:10:10.830 Starting 4 threads 00:10:12.208 00:10:12.208 job0: (groupid=0, jobs=1): err= 0: pid=76931: Sun Aug 11 20:51:22 2024 00:10:12.208 read: IOPS=1477, BW=5910KiB/s (6052kB/s)(5916KiB/1001msec) 00:10:12.208 slat (usec): min=7, max=253, avg=15.57, stdev=11.18 00:10:12.208 clat (usec): min=165, max=1192, avg=344.78, stdev=101.64 00:10:12.208 lat (usec): min=175, max=1200, avg=360.35, stdev=105.81 00:10:12.208 clat percentiles (usec): 00:10:12.208 | 1.00th=[ 237], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 269], 00:10:12.208 | 30.00th=[ 277], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 326], 00:10:12.208 | 70.00th=[ 383], 80.00th=[ 433], 90.00th=[ 490], 95.00th=[ 545], 00:10:12.208 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 1156], 99.95th=[ 1188], 00:10:12.208 | 99.99th=[ 1188] 00:10:12.208 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:12.208 slat (usec): min=9, max=144, avg=20.47, stdev= 9.89 00:10:12.208 clat (usec): min=104, max=659, avg=280.00, stdev=107.19 00:10:12.208 lat (usec): min=129, max=686, avg=300.47, stdev=111.58 00:10:12.208 clat percentiles (usec): 00:10:12.208 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 200], 00:10:12.208 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 231], 60.00th=[ 251], 00:10:12.208 | 70.00th=[ 297], 80.00th=[ 379], 90.00th=[ 465], 95.00th=[ 498], 00:10:12.208 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 644], 99.95th=[ 660], 00:10:12.208 | 99.99th=[ 660] 00:10:12.208 bw ( KiB/s): min= 8175, max= 8175, per=24.97%, avg=8175.00, stdev= 0.00, samples=1 00:10:12.208 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:12.208 lat (usec) : 250=33.33%, 500=59.97%, 750=6.60%, 1000=0.03% 00:10:12.208 lat (msec) : 2=0.07% 00:10:12.208 cpu : usr=1.80%, sys=4.20%, ctx=3028, majf=0, minf=15 00:10:12.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.208 issued rwts: total=1479,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.208 job1: (groupid=0, jobs=1): err= 0: pid=76935: Sun Aug 11 20:51:22 2024 00:10:12.208 read: IOPS=2130, BW=8523KiB/s (8728kB/s)(8532KiB/1001msec) 00:10:12.208 slat (nsec): min=10820, max=64805, avg=16823.72, stdev=7127.43 00:10:12.208 clat (usec): min=142, max=422, avg=222.36, stdev=41.87 00:10:12.208 lat (usec): min=154, max=435, avg=239.18, stdev=42.33 00:10:12.208 clat percentiles (usec): 00:10:12.208 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 188], 00:10:12.208 | 30.00th=[ 196], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 227], 00:10:12.208 | 70.00th=[ 237], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 306], 00:10:12.208 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 379], 99.95th=[ 388], 00:10:12.208 | 99.99th=[ 424] 00:10:12.208 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:12.208 slat (usec): min=15, max=112, avg=26.76, stdev=10.40 00:10:12.208 clat (usec): min=98, max=2075, avg=160.82, stdev=54.25 00:10:12.208 lat (usec): min=120, max=2128, avg=187.58, stdev=55.24 
00:10:12.208 clat percentiles (usec): 00:10:12.208 | 1.00th=[ 106], 5.00th=[ 116], 10.00th=[ 122], 20.00th=[ 130], 00:10:12.208 | 30.00th=[ 139], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 163], 00:10:12.208 | 70.00th=[ 174], 80.00th=[ 186], 90.00th=[ 206], 95.00th=[ 227], 00:10:12.208 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 709], 99.95th=[ 775], 00:10:12.208 | 99.99th=[ 2073] 00:10:12.208 bw ( KiB/s): min=12288, max=12288, per=37.54%, avg=12288.00, stdev= 0.00, samples=1 00:10:12.208 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:12.208 lat (usec) : 100=0.06%, 250=88.51%, 500=11.34%, 750=0.04%, 1000=0.02% 00:10:12.208 lat (msec) : 4=0.02% 00:10:12.208 cpu : usr=2.10%, sys=8.40%, ctx=4699, majf=0, minf=11 00:10:12.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.208 issued rwts: total=2133,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.208 job2: (groupid=0, jobs=1): err= 0: pid=76941: Sun Aug 11 20:51:22 2024 00:10:12.208 read: IOPS=1481, BW=5926KiB/s (6068kB/s)(5932KiB/1001msec) 00:10:12.208 slat (nsec): min=7172, max=64271, avg=13353.29, stdev=6489.73 00:10:12.208 clat (usec): min=185, max=1180, avg=346.65, stdev=101.47 00:10:12.208 lat (usec): min=195, max=1192, avg=360.00, stdev=104.72 00:10:12.208 clat percentiles (usec): 00:10:12.208 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 269], 00:10:12.208 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 326], 00:10:12.208 | 70.00th=[ 388], 80.00th=[ 437], 90.00th=[ 490], 95.00th=[ 545], 00:10:12.208 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 1057], 99.95th=[ 1188], 00:10:12.208 | 99.99th=[ 1188] 00:10:12.208 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:12.208 slat (usec): min=9, max=127, avg=25.36, stdev=14.06 00:10:12.208 clat (usec): min=97, max=640, avg=274.00, stdev=100.36 00:10:12.208 lat (usec): min=135, max=683, avg=299.36, stdev=110.48 00:10:12.208 clat percentiles (usec): 00:10:12.208 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 200], 00:10:12.208 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 245], 00:10:12.208 | 70.00th=[ 285], 80.00th=[ 363], 90.00th=[ 449], 95.00th=[ 482], 00:10:12.208 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 619], 99.95th=[ 644], 00:10:12.208 | 99.99th=[ 644] 00:10:12.208 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:12.208 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:12.208 lat (usec) : 100=0.03%, 250=34.15%, 500=60.12%, 750=5.60%, 1000=0.03% 00:10:12.208 lat (msec) : 2=0.07% 00:10:12.208 cpu : usr=1.30%, sys=5.20%, ctx=3031, majf=0, minf=15 00:10:12.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.208 issued rwts: total=1483,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.208 job3: (groupid=0, jobs=1): err= 0: pid=76942: Sun Aug 11 20:51:22 2024 00:10:12.208 read: IOPS=2229, BW=8919KiB/s (9133kB/s)(8928KiB/1001msec) 00:10:12.208 slat (nsec): min=10201, max=46828, 
avg=13164.83, stdev=4042.05 00:10:12.208 clat (usec): min=144, max=394, avg=222.75, stdev=42.90 00:10:12.208 lat (usec): min=158, max=404, avg=235.91, stdev=42.95 00:10:12.208 clat percentiles (usec): 00:10:12.208 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 186], 00:10:12.208 | 30.00th=[ 194], 40.00th=[ 202], 50.00th=[ 215], 60.00th=[ 227], 00:10:12.208 | 70.00th=[ 241], 80.00th=[ 260], 90.00th=[ 285], 95.00th=[ 306], 00:10:12.208 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 379], 99.95th=[ 383], 00:10:12.208 | 99.99th=[ 396] 00:10:12.208 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:12.208 slat (nsec): min=12991, max=90442, avg=20585.87, stdev=6290.45 00:10:12.208 clat (usec): min=101, max=3152, avg=161.35, stdev=100.33 00:10:12.208 lat (usec): min=117, max=3183, avg=181.94, stdev=101.02 00:10:12.208 clat percentiles (usec): 00:10:12.208 | 1.00th=[ 109], 5.00th=[ 116], 10.00th=[ 121], 20.00th=[ 129], 00:10:12.208 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 159], 00:10:12.208 | 70.00th=[ 169], 80.00th=[ 184], 90.00th=[ 206], 95.00th=[ 231], 00:10:12.208 | 99.00th=[ 273], 99.50th=[ 314], 99.90th=[ 2147], 99.95th=[ 2933], 00:10:12.208 | 99.99th=[ 3163] 00:10:12.208 bw ( KiB/s): min=12263, max=12263, per=37.46%, avg=12263.00, stdev= 0.00, samples=1 00:10:12.208 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:12.208 lat (usec) : 250=87.17%, 500=12.65%, 750=0.10% 00:10:12.208 lat (msec) : 2=0.02%, 4=0.06% 00:10:12.208 cpu : usr=1.40%, sys=6.90%, ctx=4792, majf=0, minf=7 00:10:12.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.208 issued rwts: total=2232,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.208 00:10:12.208 Run status group 0 (all jobs): 00:10:12.208 READ: bw=28.6MiB/s (30.0MB/s), 5910KiB/s-8919KiB/s (6052kB/s-9133kB/s), io=28.6MiB (30.0MB), run=1001-1001msec 00:10:12.208 WRITE: bw=32.0MiB/s (33.5MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:10:12.208 00:10:12.208 Disk stats (read/write): 00:10:12.208 nvme0n1: ios=1217/1536, merge=0/0, ticks=403/418, in_queue=821, util=86.86% 00:10:12.208 nvme0n2: ios=2027/2048, merge=0/0, ticks=483/360, in_queue=843, util=87.60% 00:10:12.208 nvme0n3: ios=1170/1536, merge=0/0, ticks=382/436, in_queue=818, util=89.07% 00:10:12.208 nvme0n4: ios=2048/2137, merge=0/0, ticks=481/365, in_queue=846, util=89.63% 00:10:12.208 20:51:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:12.208 [global] 00:10:12.208 thread=1 00:10:12.208 invalidate=1 00:10:12.208 rw=write 00:10:12.208 time_based=1 00:10:12.208 runtime=1 00:10:12.208 ioengine=libaio 00:10:12.208 direct=1 00:10:12.208 bs=4096 00:10:12.208 iodepth=128 00:10:12.208 norandommap=0 00:10:12.208 numjobs=1 00:10:12.208 00:10:12.208 verify_dump=1 00:10:12.208 verify_backlog=512 00:10:12.208 verify_state_save=0 00:10:12.208 do_verify=1 00:10:12.208 verify=crc32c-intel 00:10:12.208 [job0] 00:10:12.208 filename=/dev/nvme0n1 00:10:12.208 [job1] 00:10:12.208 filename=/dev/nvme0n2 00:10:12.208 [job2] 00:10:12.208 filename=/dev/nvme0n3 00:10:12.208 [job3] 00:10:12.208 filename=/dev/nvme0n4 
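[editor's note] The [global]/[jobN] listing above is the job file that fio-wrapper generates for the "-p nvmf -i 4096 -d 128 -t write -r 1 -v" step. A minimal hand-rolled equivalent is sketched below for reference; it only mirrors the options visible in this log (the wrapper's exact generation logic is not shown here, and /dev/nvme0n1 stands in for whichever namespaces the connected subsystem exposes on the host):

    # hypothetical standalone reproduction of the iodepth=128 sequential-write verify step
    cat > /tmp/nvmf-write.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=128
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/nvmf-write.fio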
00:10:12.208 Could not set queue depth (nvme0n1) 00:10:12.208 Could not set queue depth (nvme0n2) 00:10:12.209 Could not set queue depth (nvme0n3) 00:10:12.209 Could not set queue depth (nvme0n4) 00:10:12.209 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.209 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.209 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.209 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.209 fio-3.35 00:10:12.209 Starting 4 threads 00:10:13.586 00:10:13.586 job0: (groupid=0, jobs=1): err= 0: pid=76996: Sun Aug 11 20:51:24 2024 00:10:13.586 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:10:13.586 slat (usec): min=6, max=30574, avg=350.24, stdev=2116.51 00:10:13.586 clat (usec): min=19295, max=92647, avg=43094.03, stdev=18813.07 00:10:13.586 lat (usec): min=24319, max=92674, avg=43444.26, stdev=18855.91 00:10:13.586 clat percentiles (usec): 00:10:13.586 | 1.00th=[21890], 5.00th=[24773], 10.00th=[26346], 20.00th=[27132], 00:10:13.586 | 30.00th=[27919], 40.00th=[32637], 50.00th=[39584], 60.00th=[42206], 00:10:13.586 | 70.00th=[44827], 80.00th=[56886], 90.00th=[77071], 95.00th=[88605], 00:10:13.586 | 99.00th=[92799], 99.50th=[92799], 99.90th=[92799], 99.95th=[92799], 00:10:13.586 | 99.99th=[92799] 00:10:13.586 write: IOPS=1943, BW=7773KiB/s (7960kB/s)(7812KiB/1005msec); 0 zone resets 00:10:13.586 slat (usec): min=17, max=14294, avg=230.12, stdev=1237.85 00:10:13.586 clat (usec): min=1444, max=74197, avg=30523.46, stdev=12801.32 00:10:13.586 lat (usec): min=4976, max=74236, avg=30753.57, stdev=12790.28 00:10:13.586 clat percentiles (usec): 00:10:13.586 | 1.00th=[ 5473], 5.00th=[17433], 10.00th=[20841], 20.00th=[21890], 00:10:13.586 | 30.00th=[23200], 40.00th=[23987], 50.00th=[27919], 60.00th=[29230], 00:10:13.586 | 70.00th=[30016], 80.00th=[38011], 90.00th=[53216], 95.00th=[56886], 00:10:13.586 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:10:13.586 | 99.99th=[73925] 00:10:13.586 bw ( KiB/s): min= 6920, max= 7680, per=16.86%, avg=7300.00, stdev=537.40, samples=2 00:10:13.586 iops : min= 1730, max= 1920, avg=1825.00, stdev=134.35, samples=2 00:10:13.586 lat (msec) : 2=0.03%, 10=0.92%, 20=2.67%, 50=79.28%, 100=17.11% 00:10:13.586 cpu : usr=1.39%, sys=6.27%, ctx=112, majf=0, minf=15 00:10:13.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:10:13.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.586 issued rwts: total=1536,1953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.586 job1: (groupid=0, jobs=1): err= 0: pid=76997: Sun Aug 11 20:51:24 2024 00:10:13.586 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:10:13.586 slat (usec): min=6, max=5591, avg=132.25, stdev=544.12 00:10:13.586 clat (usec): min=13721, max=23926, avg=17482.63, stdev=1498.07 00:10:13.586 lat (usec): min=13744, max=23961, avg=17614.88, stdev=1564.10 00:10:13.586 clat percentiles (usec): 00:10:13.586 | 1.00th=[13960], 5.00th=[15139], 10.00th=[15926], 20.00th=[16450], 00:10:13.586 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17433], 60.00th=[17695], 00:10:13.586 | 
70.00th=[17957], 80.00th=[18220], 90.00th=[19268], 95.00th=[20317], 00:10:13.586 | 99.00th=[21890], 99.50th=[22676], 99.90th=[23725], 99.95th=[23725], 00:10:13.586 | 99.99th=[23987] 00:10:13.586 write: IOPS=3825, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1004msec); 0 zone resets 00:10:13.586 slat (usec): min=12, max=5189, avg=128.99, stdev=616.34 00:10:13.586 clat (usec): min=316, max=23821, avg=16638.77, stdev=2094.28 00:10:13.586 lat (usec): min=4192, max=23848, avg=16767.76, stdev=2161.20 00:10:13.586 clat percentiles (usec): 00:10:13.586 | 1.00th=[ 5276], 5.00th=[14353], 10.00th=[15008], 20.00th=[15664], 00:10:13.586 | 30.00th=[15926], 40.00th=[16188], 50.00th=[16712], 60.00th=[16909], 00:10:13.586 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18744], 95.00th=[20055], 00:10:13.586 | 99.00th=[22414], 99.50th=[22676], 99.90th=[23725], 99.95th=[23725], 00:10:13.586 | 99.99th=[23725] 00:10:13.586 bw ( KiB/s): min=13320, max=16416, per=34.33%, avg=14868.00, stdev=2189.20, samples=2 00:10:13.586 iops : min= 3330, max= 4104, avg=3717.00, stdev=547.30, samples=2 00:10:13.586 lat (usec) : 500=0.01% 00:10:13.586 lat (msec) : 10=0.57%, 20=93.21%, 50=6.21% 00:10:13.586 cpu : usr=3.89%, sys=11.86%, ctx=312, majf=0, minf=7 00:10:13.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:13.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.586 issued rwts: total=3584,3841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.586 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.586 job2: (groupid=0, jobs=1): err= 0: pid=76998: Sun Aug 11 20:51:24 2024 00:10:13.586 read: IOPS=2974, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec) 00:10:13.586 slat (usec): min=7, max=6433, avg=162.54, stdev=817.07 00:10:13.586 clat (usec): min=336, max=24473, avg=20785.31, stdev=2726.98 00:10:13.586 lat (usec): min=4209, max=24503, avg=20947.84, stdev=2610.86 00:10:13.586 clat percentiles (usec): 00:10:13.586 | 1.00th=[ 4883], 5.00th=[17171], 10.00th=[19006], 20.00th=[19792], 00:10:13.586 | 30.00th=[20317], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:10:13.586 | 70.00th=[22152], 80.00th=[22414], 90.00th=[22938], 95.00th=[23200], 00:10:13.586 | 99.00th=[24249], 99.50th=[24511], 99.90th=[24511], 99.95th=[24511], 00:10:13.586 | 99.99th=[24511] 00:10:13.586 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:13.586 slat (usec): min=12, max=8788, avg=159.37, stdev=755.52 00:10:13.586 clat (usec): min=14826, max=26364, avg=20894.17, stdev=1515.18 00:10:13.586 lat (usec): min=16704, max=26385, avg=21053.54, stdev=1321.42 00:10:13.586 clat percentiles (usec): 00:10:13.586 | 1.00th=[16319], 5.00th=[18744], 10.00th=[19268], 20.00th=[20055], 00:10:13.586 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103], 00:10:13.586 | 70.00th=[21365], 80.00th=[21890], 90.00th=[22152], 95.00th=[22676], 00:10:13.586 | 99.00th=[26346], 99.50th=[26346], 99.90th=[26346], 99.95th=[26346], 00:10:13.586 | 99.99th=[26346] 00:10:13.586 bw ( KiB/s): min=12288, max=12312, per=28.40%, avg=12300.00, stdev=16.97, samples=2 00:10:13.586 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:10:13.586 lat (usec) : 500=0.02% 00:10:13.586 lat (msec) : 10=1.06%, 20=20.65%, 50=78.28% 00:10:13.586 cpu : usr=4.10%, sys=8.30%, ctx=192, majf=0, minf=16 00:10:13.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:13.586 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.587 issued rwts: total=2977,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.587 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.587 job3: (groupid=0, jobs=1): err= 0: pid=76999: Sun Aug 11 20:51:24 2024 00:10:13.587 read: IOPS=1562, BW=6250KiB/s (6400kB/s)(6300KiB/1008msec) 00:10:13.587 slat (usec): min=6, max=12087, avg=280.20, stdev=1284.36 00:10:13.587 clat (usec): min=5157, max=59946, avg=35130.35, stdev=6811.31 00:10:13.587 lat (usec): min=9989, max=59986, avg=35410.56, stdev=6781.15 00:10:13.587 clat percentiles (usec): 00:10:13.587 | 1.00th=[16057], 5.00th=[26346], 10.00th=[28705], 20.00th=[29754], 00:10:13.587 | 30.00th=[31589], 40.00th=[32375], 50.00th=[33162], 60.00th=[34866], 00:10:13.587 | 70.00th=[38536], 80.00th=[42730], 90.00th=[44303], 95.00th=[44827], 00:10:13.587 | 99.00th=[50070], 99.50th=[53216], 99.90th=[54789], 99.95th=[60031], 00:10:13.587 | 99.99th=[60031] 00:10:13.587 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:10:13.587 slat (usec): min=13, max=15350, avg=265.88, stdev=1334.02 00:10:13.587 clat (usec): min=13103, max=88465, avg=34873.29, stdev=18180.57 00:10:13.587 lat (usec): min=13127, max=88494, avg=35139.17, stdev=18312.16 00:10:13.587 clat percentiles (usec): 00:10:13.587 | 1.00th=[13435], 5.00th=[20841], 10.00th=[21365], 20.00th=[22414], 00:10:13.587 | 30.00th=[23462], 40.00th=[24249], 50.00th=[26870], 60.00th=[28705], 00:10:13.587 | 70.00th=[31327], 80.00th=[54264], 90.00th=[66323], 95.00th=[76022], 00:10:13.587 | 99.00th=[84411], 99.50th=[87557], 99.90th=[88605], 99.95th=[88605], 00:10:13.587 | 99.99th=[88605] 00:10:13.587 bw ( KiB/s): min= 7480, max= 8208, per=18.11%, avg=7844.00, stdev=514.77, samples=2 00:10:13.587 iops : min= 1870, max= 2052, avg=1961.00, stdev=128.69, samples=2 00:10:13.587 lat (msec) : 10=0.06%, 20=2.73%, 50=83.52%, 100=13.69% 00:10:13.587 cpu : usr=2.68%, sys=5.36%, ctx=159, majf=0, minf=15 00:10:13.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:10:13.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.587 issued rwts: total=1575,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.587 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.587 00:10:13.587 Run status group 0 (all jobs): 00:10:13.587 READ: bw=37.5MiB/s (39.3MB/s), 6113KiB/s-13.9MiB/s (6260kB/s-14.6MB/s), io=37.8MiB (39.6MB), run=1001-1008msec 00:10:13.587 WRITE: bw=42.3MiB/s (44.3MB/s), 7773KiB/s-14.9MiB/s (7960kB/s-15.7MB/s), io=42.6MiB (44.7MB), run=1001-1008msec 00:10:13.587 00:10:13.587 Disk stats (read/write): 00:10:13.587 nvme0n1: ios=1266/1536, merge=0/0, ticks=14878/11271, in_queue=26149, util=88.88% 00:10:13.587 nvme0n2: ios=3121/3359, merge=0/0, ticks=16995/16120, in_queue=33115, util=88.66% 00:10:13.587 nvme0n3: ios=2560/2656, merge=0/0, ticks=12652/12508, in_queue=25160, util=89.07% 00:10:13.587 nvme0n4: ios=1536/1767, merge=0/0, ticks=26995/24419, in_queue=51414, util=89.63% 00:10:13.587 20:51:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:13.587 [global] 00:10:13.587 thread=1 00:10:13.587 invalidate=1 00:10:13.587 rw=randwrite 00:10:13.587 time_based=1 00:10:13.587 
runtime=1 00:10:13.587 ioengine=libaio 00:10:13.587 direct=1 00:10:13.587 bs=4096 00:10:13.587 iodepth=128 00:10:13.587 norandommap=0 00:10:13.587 numjobs=1 00:10:13.587 00:10:13.587 verify_dump=1 00:10:13.587 verify_backlog=512 00:10:13.587 verify_state_save=0 00:10:13.587 do_verify=1 00:10:13.587 verify=crc32c-intel 00:10:13.587 [job0] 00:10:13.587 filename=/dev/nvme0n1 00:10:13.587 [job1] 00:10:13.587 filename=/dev/nvme0n2 00:10:13.587 [job2] 00:10:13.587 filename=/dev/nvme0n3 00:10:13.587 [job3] 00:10:13.587 filename=/dev/nvme0n4 00:10:13.587 Could not set queue depth (nvme0n1) 00:10:13.587 Could not set queue depth (nvme0n2) 00:10:13.587 Could not set queue depth (nvme0n3) 00:10:13.587 Could not set queue depth (nvme0n4) 00:10:13.587 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.587 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.587 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.587 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.587 fio-3.35 00:10:13.587 Starting 4 threads 00:10:14.965 00:10:14.965 job0: (groupid=0, jobs=1): err= 0: pid=77054: Sun Aug 11 20:51:25 2024 00:10:14.965 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:10:14.965 slat (usec): min=6, max=16484, avg=187.38, stdev=1067.08 00:10:14.965 clat (usec): min=13304, max=67436, avg=23435.00, stdev=7375.59 00:10:14.965 lat (usec): min=13342, max=67462, avg=23622.37, stdev=7479.10 00:10:14.965 clat percentiles (usec): 00:10:14.965 | 1.00th=[14091], 5.00th=[15926], 10.00th=[17171], 20.00th=[17695], 00:10:14.965 | 30.00th=[18220], 40.00th=[19792], 50.00th=[22676], 60.00th=[23200], 00:10:14.965 | 70.00th=[24511], 80.00th=[28967], 90.00th=[33424], 95.00th=[34866], 00:10:14.965 | 99.00th=[53740], 99.50th=[63177], 99.90th=[67634], 99.95th=[67634], 00:10:14.965 | 99.99th=[67634] 00:10:14.965 write: IOPS=2702, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1009msec); 0 zone resets 00:10:14.965 slat (usec): min=14, max=9482, avg=182.61, stdev=803.07 00:10:14.965 clat (usec): min=5024, max=76994, avg=24649.16, stdev=14380.23 00:10:14.965 lat (usec): min=9323, max=77033, avg=24831.77, stdev=14469.24 00:10:14.965 clat percentiles (usec): 00:10:14.965 | 1.00th=[13960], 5.00th=[14877], 10.00th=[15533], 20.00th=[16450], 00:10:14.965 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17957], 60.00th=[19268], 00:10:14.965 | 70.00th=[20317], 80.00th=[31589], 90.00th=[47973], 95.00th=[60556], 00:10:14.965 | 99.00th=[70779], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:10:14.965 | 99.99th=[77071] 00:10:14.965 bw ( KiB/s): min= 8192, max=12600, per=26.50%, avg=10396.00, stdev=3116.93, samples=2 00:10:14.965 iops : min= 2048, max= 3150, avg=2599.00, stdev=779.23, samples=2 00:10:14.965 lat (msec) : 10=0.17%, 20=54.38%, 50=39.91%, 100=5.54% 00:10:14.965 cpu : usr=2.88%, sys=9.23%, ctx=234, majf=0, minf=10 00:10:14.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:14.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.965 issued rwts: total=2560,2727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.965 job1: (groupid=0, jobs=1): err= 0: 
pid=77055: Sun Aug 11 20:51:25 2024 00:10:14.965 read: IOPS=2797, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1007msec) 00:10:14.965 slat (usec): min=8, max=16842, avg=189.15, stdev=1104.44 00:10:14.965 clat (usec): min=1757, max=65702, avg=24407.37, stdev=10221.60 00:10:14.965 lat (usec): min=6897, max=65716, avg=24596.52, stdev=10240.19 00:10:14.965 clat percentiles (usec): 00:10:14.965 | 1.00th=[ 7439], 5.00th=[16712], 10.00th=[17433], 20.00th=[17957], 00:10:14.965 | 30.00th=[18482], 40.00th=[19268], 50.00th=[20579], 60.00th=[21627], 00:10:14.965 | 70.00th=[25822], 80.00th=[30802], 90.00th=[35390], 95.00th=[47449], 00:10:14.965 | 99.00th=[65799], 99.50th=[65799], 99.90th=[65799], 99.95th=[65799], 00:10:14.965 | 99.99th=[65799] 00:10:14.965 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:10:14.965 slat (usec): min=10, max=8756, avg=145.13, stdev=696.58 00:10:14.965 clat (usec): min=12293, max=27502, avg=18890.64, stdev=3030.00 00:10:14.965 lat (usec): min=15365, max=28414, avg=19035.77, stdev=2970.85 00:10:14.965 clat percentiles (usec): 00:10:14.965 | 1.00th=[13698], 5.00th=[16057], 10.00th=[16319], 20.00th=[16712], 00:10:14.965 | 30.00th=[16909], 40.00th=[17171], 50.00th=[17695], 60.00th=[18220], 00:10:14.965 | 70.00th=[19268], 80.00th=[22414], 90.00th=[24511], 95.00th=[25035], 00:10:14.965 | 99.00th=[27132], 99.50th=[27395], 99.90th=[27395], 99.95th=[27395], 00:10:14.965 | 99.99th=[27395] 00:10:14.965 bw ( KiB/s): min=11256, max=13346, per=31.36%, avg=12301.00, stdev=1477.85, samples=2 00:10:14.965 iops : min= 2814, max= 3336, avg=3075.00, stdev=369.11, samples=2 00:10:14.965 lat (msec) : 2=0.02%, 10=0.54%, 20=58.99%, 50=38.33%, 100=2.12% 00:10:14.965 cpu : usr=3.18%, sys=8.75%, ctx=187, majf=0, minf=12 00:10:14.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:14.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.965 issued rwts: total=2817,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.965 job2: (groupid=0, jobs=1): err= 0: pid=77056: Sun Aug 11 20:51:25 2024 00:10:14.965 read: IOPS=1685, BW=6741KiB/s (6903kB/s)(6788KiB/1007msec) 00:10:14.965 slat (usec): min=9, max=11129, avg=267.72, stdev=1412.34 00:10:14.965 clat (usec): min=2015, max=43435, avg=33424.99, stdev=6485.96 00:10:14.965 lat (usec): min=9522, max=43457, avg=33692.71, stdev=6360.67 00:10:14.965 clat percentiles (usec): 00:10:14.965 | 1.00th=[ 9896], 5.00th=[24249], 10.00th=[29754], 20.00th=[30802], 00:10:14.965 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:10:14.965 | 70.00th=[33817], 80.00th=[41157], 90.00th=[42206], 95.00th=[43254], 00:10:14.965 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:14.965 | 99.99th=[43254] 00:10:14.965 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:10:14.965 slat (usec): min=13, max=10729, avg=261.31, stdev=1344.14 00:10:14.965 clat (usec): min=22844, max=42125, avg=33653.04, stdev=4349.28 00:10:14.965 lat (usec): min=29646, max=42155, avg=33914.35, stdev=4170.15 00:10:14.965 clat percentiles (usec): 00:10:14.965 | 1.00th=[24249], 5.00th=[29754], 10.00th=[30278], 20.00th=[30540], 00:10:14.965 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[32375], 00:10:14.965 | 70.00th=[33817], 80.00th=[40109], 90.00th=[40633], 95.00th=[41157], 00:10:14.965 | 
99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:14.965 | 99.99th=[42206] 00:10:14.965 bw ( KiB/s): min= 8192, max= 8208, per=20.90%, avg=8200.00, stdev=11.31, samples=2 00:10:14.965 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:10:14.965 lat (msec) : 4=0.03%, 10=0.64%, 20=1.07%, 50=98.26% 00:10:14.965 cpu : usr=1.59%, sys=5.85%, ctx=121, majf=0, minf=15 00:10:14.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:10:14.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.965 issued rwts: total=1697,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.965 job3: (groupid=0, jobs=1): err= 0: pid=77057: Sun Aug 11 20:51:25 2024 00:10:14.965 read: IOPS=1686, BW=6748KiB/s (6909kB/s)(6788KiB/1006msec) 00:10:14.965 slat (usec): min=9, max=11069, avg=268.45, stdev=1415.41 00:10:14.965 clat (usec): min=1039, max=43559, avg=33230.86, stdev=6636.47 00:10:14.965 lat (usec): min=7718, max=43572, avg=33499.31, stdev=6514.99 00:10:14.965 clat percentiles (usec): 00:10:14.965 | 1.00th=[ 8029], 5.00th=[24249], 10.00th=[29492], 20.00th=[30540], 00:10:14.965 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[32113], 00:10:14.965 | 70.00th=[33162], 80.00th=[40633], 90.00th=[42730], 95.00th=[43254], 00:10:14.965 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:14.965 | 99.99th=[43779] 00:10:14.965 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:10:14.965 slat (usec): min=12, max=11000, avg=260.14, stdev=1340.81 00:10:14.965 clat (usec): min=22363, max=42567, avg=33799.75, stdev=4396.82 00:10:14.965 lat (usec): min=29363, max=42593, avg=34059.90, stdev=4220.54 00:10:14.965 clat percentiles (usec): 00:10:14.965 | 1.00th=[24249], 5.00th=[29754], 10.00th=[30278], 20.00th=[30802], 00:10:14.965 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32637], 00:10:14.965 | 70.00th=[33817], 80.00th=[40633], 90.00th=[41157], 95.00th=[41681], 00:10:14.965 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:14.965 | 99.99th=[42730] 00:10:14.965 bw ( KiB/s): min= 8192, max= 8208, per=20.90%, avg=8200.00, stdev=11.31, samples=2 00:10:14.965 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:10:14.965 lat (msec) : 2=0.03%, 10=0.85%, 20=0.85%, 50=98.26% 00:10:14.965 cpu : usr=1.59%, sys=6.27%, ctx=118, majf=0, minf=11 00:10:14.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:10:14.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.965 issued rwts: total=1697,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.965 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.965 00:10:14.965 Run status group 0 (all jobs): 00:10:14.965 READ: bw=34.0MiB/s (35.6MB/s), 6741KiB/s-10.9MiB/s (6903kB/s-11.5MB/s), io=34.3MiB (35.9MB), run=1006-1009msec 00:10:14.965 WRITE: bw=38.3MiB/s (40.2MB/s), 8135KiB/s-11.9MiB/s (8330kB/s-12.5MB/s), io=38.7MiB (40.5MB), run=1006-1009msec 00:10:14.965 00:10:14.965 Disk stats (read/write): 00:10:14.966 nvme0n1: ios=2129/2560, merge=0/0, ticks=18788/23771, in_queue=42559, util=88.05% 00:10:14.966 nvme0n2: ios=2516/2560, merge=0/0, ticks=14660/10126, in_queue=24786, util=87.87% 
00:10:14.966 nvme0n3: ios=1536/1600, merge=0/0, ticks=11957/12164, in_queue=24121, util=88.89% 00:10:14.966 nvme0n4: ios=1536/1600, merge=0/0, ticks=12211/12761, in_queue=24972, util=89.54% 00:10:14.966 20:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:14.966 20:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77070 00:10:14.966 20:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:14.966 20:51:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:14.966 [global] 00:10:14.966 thread=1 00:10:14.966 invalidate=1 00:10:14.966 rw=read 00:10:14.966 time_based=1 00:10:14.966 runtime=10 00:10:14.966 ioengine=libaio 00:10:14.966 direct=1 00:10:14.966 bs=4096 00:10:14.966 iodepth=1 00:10:14.966 norandommap=1 00:10:14.966 numjobs=1 00:10:14.966 00:10:14.966 [job0] 00:10:14.966 filename=/dev/nvme0n1 00:10:14.966 [job1] 00:10:14.966 filename=/dev/nvme0n2 00:10:14.966 [job2] 00:10:14.966 filename=/dev/nvme0n3 00:10:14.966 [job3] 00:10:14.966 filename=/dev/nvme0n4 00:10:14.966 Could not set queue depth (nvme0n1) 00:10:14.966 Could not set queue depth (nvme0n2) 00:10:14.966 Could not set queue depth (nvme0n3) 00:10:14.966 Could not set queue depth (nvme0n4) 00:10:14.966 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.966 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.966 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.966 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.966 fio-3.35 00:10:14.966 Starting 4 threads 00:10:18.279 20:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:18.279 fio: pid=77119, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:18.279 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=32100352, buflen=4096 00:10:18.279 20:51:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:18.538 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=33677312, buflen=4096 00:10:18.538 fio: pid=77118, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:18.538 20:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.538 20:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:18.796 fio: pid=77116, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:18.796 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=36233216, buflen=4096 00:10:18.796 20:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.796 20:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:19.055 fio: pid=77117, err=95/file:io_u.c:1889, func=io_u error, 
error=Operation not supported 00:10:19.055 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=55767040, buflen=4096 00:10:19.055 00:10:19.055 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=77116: Sun Aug 11 20:51:29 2024 00:10:19.055 read: IOPS=2502, BW=9.77MiB/s (10.2MB/s)(34.6MiB/3535msec) 00:10:19.055 slat (usec): min=7, max=10954, avg=26.31, stdev=199.72 00:10:19.055 clat (usec): min=135, max=99215, avg=370.64, stdev=1056.67 00:10:19.055 lat (usec): min=149, max=99245, avg=396.95, stdev=1075.60 00:10:19.055 clat percentiles (usec): 00:10:19.055 | 1.00th=[ 165], 5.00th=[ 210], 10.00th=[ 258], 20.00th=[ 289], 00:10:19.055 | 30.00th=[ 310], 40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 363], 00:10:19.055 | 70.00th=[ 383], 80.00th=[ 416], 90.00th=[ 486], 95.00th=[ 553], 00:10:19.055 | 99.00th=[ 709], 99.50th=[ 848], 99.90th=[ 1004], 99.95th=[ 1106], 00:10:19.055 | 99.99th=[99091] 00:10:19.055 bw ( KiB/s): min= 7216, max=11840, per=24.54%, avg=9900.00, stdev=1609.43, samples=6 00:10:19.055 iops : min= 1804, max= 2960, avg=2475.00, stdev=402.36, samples=6 00:10:19.055 lat (usec) : 250=8.75%, 500=82.34%, 750=8.17%, 1000=0.62% 00:10:19.055 lat (msec) : 2=0.06%, 4=0.03%, 100=0.01% 00:10:19.055 cpu : usr=1.56%, sys=4.61%, ctx=8853, majf=0, minf=1 00:10:19.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.055 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.055 issued rwts: total=8847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.055 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=77117: Sun Aug 11 20:51:29 2024 00:10:19.055 read: IOPS=3565, BW=13.9MiB/s (14.6MB/s)(53.2MiB/3819msec) 00:10:19.055 slat (usec): min=7, max=10801, avg=18.14, stdev=176.22 00:10:19.055 clat (usec): min=136, max=7515, avg=260.97, stdev=130.04 00:10:19.055 lat (usec): min=148, max=11054, avg=279.11, stdev=220.18 00:10:19.055 clat percentiles (usec): 00:10:19.055 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 190], 00:10:19.055 | 30.00th=[ 208], 40.00th=[ 229], 50.00th=[ 247], 60.00th=[ 269], 00:10:19.055 | 70.00th=[ 289], 80.00th=[ 318], 90.00th=[ 351], 95.00th=[ 383], 00:10:19.055 | 99.00th=[ 478], 99.50th=[ 529], 99.90th=[ 1532], 99.95th=[ 2900], 00:10:19.055 | 99.99th=[ 4228] 00:10:19.055 bw ( KiB/s): min=10968, max=18592, per=34.56%, avg=13945.57, stdev=2818.37, samples=7 00:10:19.055 iops : min= 2742, max= 4648, avg=3486.29, stdev=704.55, samples=7 00:10:19.055 lat (usec) : 250=51.40%, 500=47.88%, 750=0.48%, 1000=0.07% 00:10:19.055 lat (msec) : 2=0.10%, 4=0.04%, 10=0.03% 00:10:19.055 cpu : usr=1.02%, sys=4.77%, ctx=13624, majf=0, minf=1 00:10:19.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.055 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.055 issued rwts: total=13616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.055 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=77118: Sun Aug 11 20:51:29 2024 00:10:19.055 read: IOPS=2513, BW=9.82MiB/s 
(10.3MB/s)(32.1MiB/3271msec) 00:10:19.055 slat (usec): min=9, max=15634, avg=24.49, stdev=190.18 00:10:19.055 clat (usec): min=145, max=2937, avg=370.91, stdev=98.57 00:10:19.055 lat (usec): min=162, max=16087, avg=395.40, stdev=215.22 00:10:19.055 clat percentiles (usec): 00:10:19.055 | 1.00th=[ 241], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 302], 00:10:19.055 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 355], 60.00th=[ 371], 00:10:19.055 | 70.00th=[ 392], 80.00th=[ 424], 90.00th=[ 486], 95.00th=[ 553], 00:10:19.055 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[ 1020], 99.95th=[ 1090], 00:10:19.055 | 99.99th=[ 2933] 00:10:19.055 bw ( KiB/s): min= 7736, max=11872, per=24.88%, avg=10037.33, stdev=1445.43, samples=6 00:10:19.055 iops : min= 1934, max= 2968, avg=2509.33, stdev=361.36, samples=6 00:10:19.055 lat (usec) : 250=1.59%, 500=89.89%, 750=8.18%, 1000=0.21% 00:10:19.055 lat (msec) : 2=0.07%, 4=0.04% 00:10:19.055 cpu : usr=0.83%, sys=5.02%, ctx=8226, majf=0, minf=2 00:10:19.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.055 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.055 issued rwts: total=8223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.055 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=77119: Sun Aug 11 20:51:29 2024 00:10:19.055 read: IOPS=2654, BW=10.4MiB/s (10.9MB/s)(30.6MiB/2953msec) 00:10:19.055 slat (usec): min=7, max=292, avg=15.56, stdev= 8.06 00:10:19.055 clat (usec): min=167, max=4081, avg=359.64, stdev=110.96 00:10:19.055 lat (usec): min=183, max=4129, avg=375.20, stdev=113.86 00:10:19.055 clat percentiles (usec): 00:10:19.055 | 1.00th=[ 210], 5.00th=[ 247], 10.00th=[ 260], 20.00th=[ 281], 00:10:19.055 | 30.00th=[ 306], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 363], 00:10:19.055 | 70.00th=[ 388], 80.00th=[ 416], 90.00th=[ 486], 95.00th=[ 545], 00:10:19.055 | 99.00th=[ 603], 99.50th=[ 644], 99.90th=[ 1037], 99.95th=[ 2147], 00:10:19.055 | 99.99th=[ 4080] 00:10:19.056 bw ( KiB/s): min= 9344, max=13032, per=27.88%, avg=11248.00, stdev=1370.02, samples=5 00:10:19.056 iops : min= 2336, max= 3258, avg=2812.00, stdev=342.51, samples=5 00:10:19.056 lat (usec) : 250=6.44%, 500=84.68%, 750=8.54%, 1000=0.20% 00:10:19.056 lat (msec) : 2=0.06%, 4=0.05%, 10=0.01% 00:10:19.056 cpu : usr=0.78%, sys=4.03%, ctx=7846, majf=0, minf=2 00:10:19.056 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.056 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.056 issued rwts: total=7838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.056 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.056 00:10:19.056 Run status group 0 (all jobs): 00:10:19.056 READ: bw=39.4MiB/s (41.3MB/s), 9.77MiB/s-13.9MiB/s (10.2MB/s-14.6MB/s), io=150MiB (158MB), run=2953-3819msec 00:10:19.056 00:10:19.056 Disk stats (read/write): 00:10:19.056 nvme0n1: ios=8559/0, merge=0/0, ticks=3147/0, in_queue=3147, util=95.42% 00:10:19.056 nvme0n2: ios=12617/0, merge=0/0, ticks=3382/0, in_queue=3382, util=95.42% 00:10:19.056 nvme0n3: ios=7796/0, merge=0/0, ticks=2916/0, in_queue=2916, util=96.08% 00:10:19.056 nvme0n4: ios=7664/0, merge=0/0, ticks=2609/0, in_queue=2609, util=96.62% 00:10:19.056 
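[editor's note] The io_u "Operation not supported" failures above are the intended outcome of this stage rather than a regression: fio.sh leaves a 10-second read job running in the background (fio_pid=77070) and then deletes the backing bdevs over RPC while I/O is still in flight, so each namespace starts erroring out as its bdev disappears. A minimal sketch of that hot-unplug sequence, assuming the default rpc.py socket and the bdev names used in this run:

    # reads against /dev/nvme0n1..n4 are still in flight at this point
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    fio_pid=77070                          # background read job started by fio-wrapper
    "$RPC" bdev_raid_delete concat0        # the namespace it backs (here apparently nvme0n4) begins failing
    "$RPC" bdev_raid_delete raid0          # likewise for nvme0n3
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$RPC" bdev_malloc_delete "$m"
    done
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'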
20:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.056 20:51:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:19.314 20:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.314 20:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:19.572 20:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.572 20:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:19.831 20:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.831 20:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:20.090 20:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.090 20:51:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 77070 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.348 nvmf hotplug test: fio failed as expected 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:20.348 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 
-- # rm -f ./local-job0-0-verify.state 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # nvmfcleanup 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:20.915 rmmod nvme_tcp 00:10:20.915 rmmod nvme_fabrics 00:10:20.915 rmmod nvme_keyring 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # '[' -n 76696 ']' 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # killprocess 76696 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 76696 ']' 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 76696 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76696 00:10:20.915 killing process with pid 76696 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76696' 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 76696 00:10:20.915 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 76696 00:10:21.173 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:10:21.173 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:10:21.173 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:10:21.173 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # iptr 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@783 -- # iptables-save 
00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@783 -- # iptables-restore 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.174 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.433 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # remove_spdk_ns 00:10:21.433 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.433 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.433 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.433 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # return 0 00:10:21.433 00:10:21.433 real 0m19.590s 00:10:21.433 user 1m14.540s 00:10:21.433 sys 0m8.925s 00:10:21.433 ************************************ 00:10:21.433 END TEST nvmf_fio_target 00:10:21.433 ************************************ 00:10:21.433 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:21.433 20:51:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.433 ************************************ 00:10:21.433 START TEST nvmf_bdevio 00:10:21.433 ************************************ 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:21.433 * Looking for test storage... 00:10:21.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.433 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # prepare_net_devs 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # local -g is_hw=no 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # remove_spdk_ns 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@452 -- # nvmf_veth_init 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:10:21.434 Cannot find device "nvmf_init_br" 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:10:21.434 Cannot find device "nvmf_init_br2" 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:21.434 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:10:21.693 Cannot find device "nvmf_tgt_br" 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # true 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.693 Cannot find device "nvmf_tgt_br2" 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # true 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:10:21.693 Cannot find device "nvmf_init_br" 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:10:21.693 Cannot find device "nvmf_init_br2" 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:10:21.693 Cannot find device "nvmf_tgt_br" 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:10:21.693 Cannot find device "nvmf_tgt_br2" 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:10:21.693 Cannot find device "nvmf_br" 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:10:21.693 Cannot find device "nvmf_init_if" 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:10:21.693 Cannot find device "nvmf_init_if2" 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 
00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:10:21.693 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:10:21.694 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:10:21.694 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:21.694 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:21.694 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:21.694 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- 
# ip link set nvmf_tgt_br master nvmf_br 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:10:21.953 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:21.953 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:21.953 00:10:21.953 --- 10.0.0.3 ping statistics --- 00:10:21.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.953 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:10:21.953 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:21.953 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:10:21.953 00:10:21.953 --- 10.0.0.4 ping statistics --- 00:10:21.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.953 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:21.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:10:21.953 00:10:21.953 --- 10.0.0.1 ping statistics --- 00:10:21.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.953 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:21.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:10:21.953 00:10:21.953 --- 10.0.0.2 ping statistics --- 00:10:21.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.953 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@453 -- # return 0 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@501 -- # nvmfpid=77434 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # waitforlisten 77434 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 77434 ']' 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:21.953 20:51:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.953 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:10:21.953 [2024-08-11 20:51:32.647056] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:10:21.953 [2024-08-11 20:51:32.647178] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.212 [2024-08-11 20:51:32.773966] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.212 [2024-08-11 20:51:32.876454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.212 [2024-08-11 20:51:32.876541] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.212 [2024-08-11 20:51:32.876552] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.212 [2024-08-11 20:51:32.876561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.212 [2024-08-11 20:51:32.876568] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.212 [2024-08-11 20:51:32.876776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:22.212 [2024-08-11 20:51:32.877261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:22.212 [2024-08-11 20:51:32.877387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:22.212 [2024-08-11 20:51:32.877397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.212 [2024-08-11 20:51:32.951897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.148 [2024-08-11 20:51:33.653586] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.148 Malloc0 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.148 [2024-08-11 20:51:33.720437] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:23.148 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:23.149 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:23.149 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:23.149 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@552 -- # config=() 00:10:23.149 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@552 -- # local subsystem config 00:10:23.149 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:10:23.149 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:10:23.149 { 00:10:23.149 "params": { 00:10:23.149 "name": "Nvme$subsystem", 00:10:23.149 "trtype": "$TEST_TRANSPORT", 00:10:23.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:23.149 "adrfam": "ipv4", 00:10:23.149 "trsvcid": "$NVMF_PORT", 00:10:23.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:23.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:23.149 "hdgst": ${hdgst:-false}, 00:10:23.149 "ddgst": ${ddgst:-false} 00:10:23.149 }, 00:10:23.149 "method": "bdev_nvme_attach_controller" 00:10:23.149 } 00:10:23.149 EOF 00:10:23.149 )") 00:10:23.149 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@574 -- # cat 00:10:23.149 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@576 -- # jq . 
00:10:23.149 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@577 -- # IFS=, 00:10:23.149 20:51:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:10:23.149 "params": { 00:10:23.149 "name": "Nvme1", 00:10:23.149 "trtype": "tcp", 00:10:23.149 "traddr": "10.0.0.3", 00:10:23.149 "adrfam": "ipv4", 00:10:23.149 "trsvcid": "4420", 00:10:23.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:23.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:23.149 "hdgst": false, 00:10:23.149 "ddgst": false 00:10:23.149 }, 00:10:23.149 "method": "bdev_nvme_attach_controller" 00:10:23.149 }' 00:10:23.149 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:10:23.149 [2024-08-11 20:51:33.781876] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:10:23.149 [2024-08-11 20:51:33.781956] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77470 ] 00:10:23.149 [2024-08-11 20:51:33.921012] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.408 [2024-08-11 20:51:33.982251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.408 [2024-08-11 20:51:33.982414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.408 [2024-08-11 20:51:33.982417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.408 [2024-08-11 20:51:34.045563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:23.408 I/O targets: 00:10:23.408 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:23.408 00:10:23.408 00:10:23.408 CUnit - A unit testing framework for C - Version 2.1-3 00:10:23.408 http://cunit.sourceforge.net/ 00:10:23.408 00:10:23.408 00:10:23.408 Suite: bdevio tests on: Nvme1n1 00:10:23.408 Test: blockdev write read block ...passed 00:10:23.408 Test: blockdev write zeroes read block ...passed 00:10:23.408 Test: blockdev write zeroes read no split ...passed 00:10:23.408 Test: blockdev write zeroes read split ...passed 00:10:23.408 Test: blockdev write zeroes read split partial ...passed 00:10:23.408 Test: blockdev reset ...[2024-08-11 20:51:34.185531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:23.667 [2024-08-11 20:51:34.185683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2156880 (9): Bad file descriptor 00:10:23.667 [2024-08-11 20:51:34.200984] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:23.667 passed 00:10:23.667 Test: blockdev write read 8 blocks ...passed 00:10:23.667 Test: blockdev write read size > 128k ...passed 00:10:23.667 Test: blockdev write read invalid size ...passed 00:10:23.667 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:23.667 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:23.667 Test: blockdev write read max offset ...passed 00:10:23.667 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:23.667 Test: blockdev writev readv 8 blocks ...passed 00:10:23.667 Test: blockdev writev readv 30 x 1block ...passed 00:10:23.667 Test: blockdev writev readv block ...passed 00:10:23.667 Test: blockdev writev readv size > 128k ...passed 00:10:23.667 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:23.667 Test: blockdev comparev and writev ...[2024-08-11 20:51:34.209914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.667 [2024-08-11 20:51:34.209976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:23.667 [2024-08-11 20:51:34.210003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.667 [2024-08-11 20:51:34.210016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:23.667 [2024-08-11 20:51:34.210867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.667 [2024-08-11 20:51:34.210933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:23.667 [2024-08-11 20:51:34.210955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.667 [2024-08-11 20:51:34.210977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:23.667 [2024-08-11 20:51:34.211540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.667 [2024-08-11 20:51:34.211615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:23.667 [2024-08-11 20:51:34.211648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.667 [2024-08-11 20:51:34.211661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:23.667 [2024-08-11 20:51:34.212133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.668 [2024-08-11 20:51:34.212170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:23.668 [2024-08-11 20:51:34.212191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.668 [2024-08-11 20:51:34.212204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:23.668 passed 00:10:23.668 Test: blockdev nvme passthru rw ...passed 00:10:23.668 Test: blockdev nvme passthru vendor specific ...[2024-08-11 20:51:34.213126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.668 [2024-08-11 20:51:34.213160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:23.668 [2024-08-11 20:51:34.213273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.668 [2024-08-11 20:51:34.213293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:23.668 [2024-08-11 20:51:34.213417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.668 [2024-08-11 20:51:34.213448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:23.668 [2024-08-11 20:51:34.213557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.668 [2024-08-11 20:51:34.213585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:23.668 passed 00:10:23.668 Test: blockdev nvme admin passthru ...passed 00:10:23.668 Test: blockdev copy ...passed 00:10:23.668 00:10:23.668 Run Summary: Type Total Ran Passed Failed Inactive 00:10:23.668 suites 1 1 n/a 0 0 00:10:23.668 tests 23 23 23 0 0 00:10:23.668 asserts 152 152 152 0 n/a 00:10:23.668 00:10:23.668 Elapsed time = 0.145 seconds 00:10:23.668 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.668 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:23.668 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.668 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:23.668 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:23.668 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:23.668 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # nvmfcleanup 00:10:23.668 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.927 rmmod nvme_tcp 00:10:23.927 rmmod nvme_fabrics 00:10:23.927 rmmod nvme_keyring 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # '[' -n 77434 ']' 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # killprocess 77434 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 77434 ']' 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 77434 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77434 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77434' 00:10:23.927 killing process with pid 77434 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 77434 00:10:23.927 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 77434 00:10:24.185 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:10:24.185 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:10:24.185 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:10:24.185 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # iptr 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@783 -- # iptables-save 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@783 -- # iptables-restore 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:10:24.186 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:10:24.444 20:51:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # 
ip link delete nvmf_init_if 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # remove_spdk_ns 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # return 0 00:10:24.444 00:10:24.444 real 0m3.084s 00:10:24.444 user 0m9.191s 00:10:24.444 sys 0m0.921s 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.444 ************************************ 00:10:24.444 END TEST nvmf_bdevio 00:10:24.444 ************************************ 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.444 20:51:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:24.444 ************************************ 00:10:24.444 00:10:24.444 real 2m32.566s 00:10:24.444 user 6m42.637s 00:10:24.445 sys 0m52.432s 00:10:24.445 20:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:24.445 20:51:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:24.445 END TEST nvmf_target_core 00:10:24.445 ************************************ 00:10:24.445 20:51:35 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:24.445 20:51:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:24.445 20:51:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.445 20:51:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:24.704 ************************************ 00:10:24.704 START TEST nvmf_target_extra 00:10:24.704 ************************************ 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:24.704 * Looking for test storage... 
00:10:24.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:24.704 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.705 20:51:35 
nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:24.705 ************************************ 00:10:24.705 START TEST nvmf_auth_target 00:10:24.705 ************************************ 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:24.705 * Looking for test storage... 
00:10:24.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.705 20:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 
-- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:10:24.705 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # prepare_net_devs 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # local -g is_hw=no 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # remove_spdk_ns 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@452 -- # nvmf_veth_init 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:24.706 20:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:10:24.706 Cannot find device "nvmf_init_br" 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:24.706 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:10:24.965 Cannot find device "nvmf_init_br2" 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:10:24.965 Cannot find device "nvmf_tgt_br" 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:10:24.965 Cannot find device "nvmf_tgt_br2" 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:10:24.965 Cannot find device "nvmf_init_br" 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:10:24.965 Cannot find device "nvmf_init_br2" 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:10:24.965 Cannot find device "nvmf_tgt_br" 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:10:24.965 Cannot find device "nvmf_tgt_br2" 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:10:24.965 Cannot find device "nvmf_br" 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:10:24.965 Cannot find device "nvmf_init_if" 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- 
# ip link delete nvmf_init_if2 00:10:24.965 Cannot find device "nvmf_init_if2" 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:24.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:24.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:24.965 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:25.224 20:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:25.224 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:10:25.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:25.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:25.225 00:10:25.225 --- 10.0.0.3 ping statistics --- 00:10:25.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.225 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:10:25.225 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:25.225 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:10:25.225 00:10:25.225 --- 10.0.0.4 ping statistics --- 00:10:25.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.225 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:25.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:25.225 00:10:25.225 --- 10.0.0.1 ping statistics --- 00:10:25.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.225 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:25.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:25.225 00:10:25.225 --- 10.0.0.2 ping statistics --- 00:10:25.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.225 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@453 -- # return 0 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@501 -- # nvmfpid=77743 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # waitforlisten 77743 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 77743 ']' 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
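Annotation: the trace above (nvmf/common.sh@151-@221) builds the virtual network the rest of the run relies on, then launches nvmf_tgt inside the target namespace. A condensed sketch of the same steps, using only the interface names, 10.0.0.0/24 addresses and commands visible in the trace; the script's ipts helper is just iptables plus an SPDK_NVMF comment on each rule so cleanup can find it later.

# Sketch of the test topology set up by nvmf/common.sh (names/addresses as in the trace above).
ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side veth pairs
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side veth pairs
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joins the four peer ends
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP in (same rule added for nvmf_init_if2,
                                                                    # plus a FORWARD accept on nvmf_br)
ping -c 1 10.0.0.3    # sanity checks in both directions, as shown by the ping output above

The target application is then started with the namespace prefix, which is why NVMF_APP is rewritten to "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth" in the trace.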
00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:25.225 20:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.818 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:25.818 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:10:25.818 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:10:25.818 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77762 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # local digest len file key 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # local -A digests 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # digest=null 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # len=48 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # key=50b93b32e9fac48aed1b1d34191a856f47560599cf0609c2 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # mktemp -t spdk.key-null.XXX 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-null.N3a 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # format_dhchap_key 50b93b32e9fac48aed1b1d34191a856f47560599cf0609c2 0 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@739 -- # format_key DHHC-1 50b93b32e9fac48aed1b1d34191a856f47560599cf0609c2 0 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@722 -- # local prefix key digest 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # key=50b93b32e9fac48aed1b1d34191a856f47560599cf0609c2 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digest=0 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@725 -- # python - 00:10:25.819 20:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-null.N3a 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-null.N3a 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.N3a 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # local digest len file key 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # local -A digests 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # digest=sha512 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # len=64 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # key=42ef141f2e76067340c242d4a4a74f1f11e0cf083f7367735448a4957bfe2e5b 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha512.XXX 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha512.5ad 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # format_dhchap_key 42ef141f2e76067340c242d4a4a74f1f11e0cf083f7367735448a4957bfe2e5b 3 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@739 -- # format_key DHHC-1 42ef141f2e76067340c242d4a4a74f1f11e0cf083f7367735448a4957bfe2e5b 3 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@722 -- # local prefix key digest 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # key=42ef141f2e76067340c242d4a4a74f1f11e0cf083f7367735448a4957bfe2e5b 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digest=3 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@725 -- # python - 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha512.5ad 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha512.5ad 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.5ad 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # local digest len file key 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # local -A digests 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # digest=sha256 00:10:25.819 20:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # len=32 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # key=30a4a1ab8a06f21e079343dabd188789 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha256.XXX 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha256.kdb 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # format_dhchap_key 30a4a1ab8a06f21e079343dabd188789 1 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@739 -- # format_key DHHC-1 30a4a1ab8a06f21e079343dabd188789 1 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@722 -- # local prefix key digest 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # key=30a4a1ab8a06f21e079343dabd188789 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digest=1 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@725 -- # python - 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha256.kdb 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha256.kdb 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.kdb 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # local digest len file key 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # local -A digests 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # digest=sha384 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # len=48 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # key=91de0ed3db3da31adbb839c8cd3d5e8305e0effbb848460e 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha384.XXX 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha384.bqa 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # format_dhchap_key 91de0ed3db3da31adbb839c8cd3d5e8305e0effbb848460e 2 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@739 -- # format_key DHHC-1 91de0ed3db3da31adbb839c8cd3d5e8305e0effbb848460e 2 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@722 -- # local prefix key digest 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@724 -- # prefix=DHHC-1 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # key=91de0ed3db3da31adbb839c8cd3d5e8305e0effbb848460e 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digest=2 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@725 -- # python - 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha384.bqa 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha384.bqa 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.bqa 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # local digest len file key 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # local -A digests 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # digest=sha384 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # len=48 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:25.819 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # key=703c0f7af8a6630d33a4360517fc6fc1ace1f94d03ae1357 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha384.XXX 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha384.WHj 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # format_dhchap_key 703c0f7af8a6630d33a4360517fc6fc1ace1f94d03ae1357 2 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@739 -- # format_key DHHC-1 703c0f7af8a6630d33a4360517fc6fc1ace1f94d03ae1357 2 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@722 -- # local prefix key digest 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # key=703c0f7af8a6630d33a4360517fc6fc1ace1f94d03ae1357 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digest=2 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@725 -- # python - 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha384.WHj 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha384.WHj 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.WHj 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # local digest len file key 00:10:26.079 20:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # local -A digests 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # digest=sha256 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # len=32 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # key=04b89e6449fd8584794f1332401d2416 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha256.XXX 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha256.7w4 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # format_dhchap_key 04b89e6449fd8584794f1332401d2416 1 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@739 -- # format_key DHHC-1 04b89e6449fd8584794f1332401d2416 1 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@722 -- # local prefix key digest 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # key=04b89e6449fd8584794f1332401d2416 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digest=1 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@725 -- # python - 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha256.7w4 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha256.7w4 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.7w4 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # local digest len file key 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@744 -- # local -A digests 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # digest=sha512 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@746 -- # len=64 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # key=345e2d08c7ec6cc09e6193511d3ff91eb15bc9f3419edf043027a6473be14e65 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha512.XXX 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha512.C8l 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # format_dhchap_key 
345e2d08c7ec6cc09e6193511d3ff91eb15bc9f3419edf043027a6473be14e65 3 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@739 -- # format_key DHHC-1 345e2d08c7ec6cc09e6193511d3ff91eb15bc9f3419edf043027a6473be14e65 3 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@722 -- # local prefix key digest 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # key=345e2d08c7ec6cc09e6193511d3ff91eb15bc9f3419edf043027a6473be14e65 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digest=3 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@725 -- # python - 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha512.C8l 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha512.C8l 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.C8l 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77743 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 77743 ']' 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:26.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:26.079 20:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.338 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:26.338 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:10:26.338 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77762 /var/tmp/host.sock 00:10:26.338 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 77762 ']' 00:10:26.338 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:10:26.338 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:26.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:26.338 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
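Annotation: the gen_dhchap_key calls above produce four host secrets (keys[0..3]) and three controller secrets (ckeys[0..2]); ckeys[3] is deliberately left empty. A minimal sketch of the recipe as the trace shows it: random hex from /dev/urandom via xxd, a temp file per key, mode 0600. The DHHC-1 framing itself ("DHHC-1:<digest-id>:<encoded payload>:", with digest ids null=0, sha256=1, sha384=2, sha512=3) is produced by the script's small python helper and is not reproduced here.

# Hedged sketch of gen_dhchap_key as exercised in the trace.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                            # e.g. "sha512 64" -> 64 hex chars of key material
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len/2 random bytes, printed as a hex string
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # The real script feeds $key to a python helper that writes
    # "DHHC-1:<digest-id>:<encoded key + checksum>:" into $file.
    chmod 0600 "$file"
    echo "$file"
}

# Usage matching the trace: keys[0]=$(gen_dhchap_key_sketch null 48), ckeys[0]=$(gen_dhchap_key_sketch sha512 64), ...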
00:10:26.338 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:26.338 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.596 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:26.596 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:10:26.596 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:26.596 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:26.596 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.855 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:26.855 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:26.855 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.N3a 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.N3a 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.N3a 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.5ad ]] 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5ad 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:26.856 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5ad 00:10:27.115 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5ad 00:10:27.373 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:27.373 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kdb 00:10:27.373 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:27.373 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.373 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:27.373 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kdb 00:10:27.373 20:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kdb 00:10:27.373 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.bqa ]] 00:10:27.373 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bqa 00:10:27.373 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:27.373 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.632 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:27.632 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bqa 00:10:27.632 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bqa 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.WHj 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.WHj 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.WHj 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.7w4 ]] 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7w4 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7w4 00:10:27.891 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7w4 00:10:28.150 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:28.150 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.C8l 00:10:28.150 20:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:28.150 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.150 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:28.150 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.C8l 00:10:28.150 20:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.C8l 00:10:28.408 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:28.409 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:28.409 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:28.409 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:28.409 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:28.409 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:28.975 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:28.975 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:28.975 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:28.976 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:28.976 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:28.976 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.976 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.976 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:28.976 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.976 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:28.976 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.976 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:10:29.234 00:10:29.234 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:29.234 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:29.234 20:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.492 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.492 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.492 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:29.492 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.493 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:29.493 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:29.493 { 00:10:29.493 "cntlid": 1, 00:10:29.493 "qid": 0, 00:10:29.493 "state": "enabled", 00:10:29.493 "thread": "nvmf_tgt_poll_group_000", 00:10:29.493 "listen_address": { 00:10:29.493 "trtype": "TCP", 00:10:29.493 "adrfam": "IPv4", 00:10:29.493 "traddr": "10.0.0.3", 00:10:29.493 "trsvcid": "4420" 00:10:29.493 }, 00:10:29.493 "peer_address": { 00:10:29.493 "trtype": "TCP", 00:10:29.493 "adrfam": "IPv4", 00:10:29.493 "traddr": "10.0.0.1", 00:10:29.493 "trsvcid": "52040" 00:10:29.493 }, 00:10:29.493 "auth": { 00:10:29.493 "state": "completed", 00:10:29.493 "digest": "sha256", 00:10:29.493 "dhgroup": "null" 00:10:29.493 } 00:10:29.493 } 00:10:29.493 ]' 00:10:29.493 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:29.493 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.493 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:29.493 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:29.493 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:29.493 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.493 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.493 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.060 20:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:10:34.251 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.251 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:10:34.251 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:34.251 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:34.251 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.251 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:34.251 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:34.251 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:34.251 20:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:34.251 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:34.251 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:34.251 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:34.251 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:34.251 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:34.251 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.251 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.251 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:34.251 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.251 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:34.252 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.252 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.818 00:10:34.818 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:34.818 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.818 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
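Annotation: each keyid now goes through the same target/auth.sh round: the host-side bdev module is told which digest/DH-group to offer, the subsystem is told which key (and optional controller key) this host must present, and a controller attach to 10.0.0.3:4420 exercises the handshake. A condensed sketch of one round for key1, using only RPCs that appear in the trace (rpc_cmd is the target rpc.py, hostrpc is rpc.py -s /var/tmp/host.sock).

# One authentication round, condensed from the trace.
rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.kdb                          # target keyring
rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bqa
rpc.py -s /var/tmp/host.sock keyring_file_add_key key1  /tmp/spdk.key-sha256.kdb    # host keyring
rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bqa
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Success check, as the jq filters in the trace do: the qpair must report auth state "completed".
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'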
00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.077 { 00:10:35.077 "cntlid": 3, 00:10:35.077 "qid": 0, 00:10:35.077 "state": "enabled", 00:10:35.077 "thread": "nvmf_tgt_poll_group_000", 00:10:35.077 "listen_address": { 00:10:35.077 "trtype": "TCP", 00:10:35.077 "adrfam": "IPv4", 00:10:35.077 "traddr": "10.0.0.3", 00:10:35.077 "trsvcid": "4420" 00:10:35.077 }, 00:10:35.077 "peer_address": { 00:10:35.077 "trtype": "TCP", 00:10:35.077 "adrfam": "IPv4", 00:10:35.077 "traddr": "10.0.0.1", 00:10:35.077 "trsvcid": "52066" 00:10:35.077 }, 00:10:35.077 "auth": { 00:10:35.077 "state": "completed", 00:10:35.077 "digest": "sha256", 00:10:35.077 "dhgroup": "null" 00:10:35.077 } 00:10:35.077 } 00:10:35.077 ]' 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.077 20:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.645 20:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:10:36.212 20:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.212 20:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:36.212 20:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:36.212 20:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
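Annotation: alongside the SPDK host stack, the run cross-checks each key with the kernel initiator. nvme-cli connects to the same subsystem passing the secrets in DHHC-1 form (the base64-looking strings in the trace correspond to the hex keys generated earlier) and then disconnects, after which the host entry is removed so the next key can be installed. A sketch of that step; $HOST_UUID and the two secret variables stand in for the literal values printed above.

# Kernel-initiator cross-check as run in the trace.
HOST_UUID=78d593be-f127-44be-9e85-a8fa7f0a66f9
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}" --hostid "$HOST_UUID" \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0     # expects "disconnected 1 controller(s)"
# Then clear the host entry before the next keyid:
rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    "nqn.2014-08.org.nvmexpress:uuid:${HOST_UUID}"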
00:10:36.212 20:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:36.212 20:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.212 20:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:36.212 20:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.470 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.729 00:10:36.729 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.729 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.729 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.987 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.987 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.987 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:36.987 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.987 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:36.987 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:36.987 { 00:10:36.987 "cntlid": 5, 00:10:36.987 "qid": 0, 00:10:36.987 "state": "enabled", 00:10:36.987 "thread": "nvmf_tgt_poll_group_000", 00:10:36.987 "listen_address": { 00:10:36.987 "trtype": "TCP", 00:10:36.987 "adrfam": "IPv4", 00:10:36.987 "traddr": "10.0.0.3", 00:10:36.987 "trsvcid": "4420" 00:10:36.987 }, 00:10:36.987 "peer_address": { 00:10:36.987 "trtype": "TCP", 00:10:36.987 "adrfam": "IPv4", 00:10:36.987 "traddr": "10.0.0.1", 00:10:36.987 "trsvcid": "48718" 00:10:36.987 }, 00:10:36.987 "auth": { 00:10:36.987 "state": "completed", 00:10:36.987 "digest": "sha256", 00:10:36.987 "dhgroup": "null" 00:10:36.987 } 00:10:36.987 } 00:10:36.987 ]' 00:10:36.987 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:36.987 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.987 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.246 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:37.246 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.246 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.246 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.246 20:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.505 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:10:38.071 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.071 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:38.071 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:38.071 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.071 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:38.071 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:38.071 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:38.071 20:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:38.330 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:38.330 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.330 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.330 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:38.330 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:38.330 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.330 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:10:38.330 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:38.330 20:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.330 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:38.330 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.330 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.588 00:10:38.588 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.588 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.588 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.847 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.847 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.847 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:38.847 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.847 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:38.847 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:38.847 { 00:10:38.847 "cntlid": 7, 00:10:38.847 "qid": 0, 00:10:38.847 "state": "enabled", 00:10:38.847 "thread": "nvmf_tgt_poll_group_000", 00:10:38.847 "listen_address": { 00:10:38.847 "trtype": "TCP", 00:10:38.847 "adrfam": "IPv4", 00:10:38.847 "traddr": 
"10.0.0.3", 00:10:38.847 "trsvcid": "4420" 00:10:38.847 }, 00:10:38.847 "peer_address": { 00:10:38.847 "trtype": "TCP", 00:10:38.847 "adrfam": "IPv4", 00:10:38.847 "traddr": "10.0.0.1", 00:10:38.847 "trsvcid": "48744" 00:10:38.847 }, 00:10:38.847 "auth": { 00:10:38.847 "state": "completed", 00:10:38.847 "digest": "sha256", 00:10:38.847 "dhgroup": "null" 00:10:38.847 } 00:10:38.847 } 00:10:38.847 ]' 00:10:38.847 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:38.847 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.847 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:39.105 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:39.105 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:39.105 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.105 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.105 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.364 20:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:10:39.931 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.932 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:39.932 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:39.932 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.932 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:39.932 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.932 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.932 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:39.932 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.191 20:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.191 20:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.450 00:10:40.450 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:40.450 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:40.450 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.711 { 00:10:40.711 "cntlid": 9, 00:10:40.711 "qid": 0, 00:10:40.711 "state": "enabled", 00:10:40.711 "thread": "nvmf_tgt_poll_group_000", 00:10:40.711 "listen_address": { 00:10:40.711 "trtype": "TCP", 00:10:40.711 "adrfam": "IPv4", 00:10:40.711 "traddr": "10.0.0.3", 00:10:40.711 "trsvcid": "4420" 00:10:40.711 }, 00:10:40.711 "peer_address": { 00:10:40.711 "trtype": "TCP", 00:10:40.711 "adrfam": "IPv4", 00:10:40.711 "traddr": "10.0.0.1", 00:10:40.711 "trsvcid": "48782" 00:10:40.711 }, 00:10:40.711 "auth": { 00:10:40.711 "state": "completed", 00:10:40.711 "digest": "sha256", 00:10:40.711 "dhgroup": "ffdhe2048" 00:10:40.711 } 00:10:40.711 } 
00:10:40.711 ]' 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:40.711 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.974 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.974 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.974 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.975 20:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.911 20:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.478 00:10:42.478 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.478 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.478 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.735 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.736 { 00:10:42.736 "cntlid": 11, 00:10:42.736 "qid": 0, 00:10:42.736 "state": "enabled", 00:10:42.736 "thread": "nvmf_tgt_poll_group_000", 00:10:42.736 "listen_address": { 00:10:42.736 "trtype": "TCP", 00:10:42.736 "adrfam": "IPv4", 00:10:42.736 "traddr": "10.0.0.3", 00:10:42.736 "trsvcid": "4420" 00:10:42.736 }, 00:10:42.736 "peer_address": { 00:10:42.736 "trtype": "TCP", 00:10:42.736 "adrfam": "IPv4", 00:10:42.736 "traddr": "10.0.0.1", 00:10:42.736 "trsvcid": "48804" 00:10:42.736 }, 00:10:42.736 "auth": { 00:10:42.736 "state": "completed", 00:10:42.736 "digest": "sha256", 00:10:42.736 "dhgroup": "ffdhe2048" 00:10:42.736 } 00:10:42.736 } 00:10:42.736 ]' 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:42.736 20:51:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.736 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.993 20:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 
00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:43.929 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:43.929 20:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.495 00:10:44.495 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.495 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.495 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.753 { 00:10:44.753 "cntlid": 13, 00:10:44.753 "qid": 0, 00:10:44.753 "state": "enabled", 00:10:44.753 "thread": "nvmf_tgt_poll_group_000", 00:10:44.753 "listen_address": { 00:10:44.753 "trtype": "TCP", 00:10:44.753 "adrfam": "IPv4", 00:10:44.753 "traddr": "10.0.0.3", 00:10:44.753 "trsvcid": "4420" 00:10:44.753 }, 00:10:44.753 "peer_address": { 00:10:44.753 "trtype": "TCP", 00:10:44.753 "adrfam": "IPv4", 00:10:44.753 "traddr": "10.0.0.1", 00:10:44.753 "trsvcid": "48834" 00:10:44.753 }, 00:10:44.753 "auth": { 00:10:44.753 "state": "completed", 00:10:44.753 "digest": "sha256", 00:10:44.753 "dhgroup": "ffdhe2048" 00:10:44.753 } 00:10:44.753 } 00:10:44.753 ]' 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.753 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.754 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.012 20:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:10:45.946 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:45.947 20:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:46.513 00:10:46.513 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.513 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.513 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.772 { 00:10:46.772 "cntlid": 15, 00:10:46.772 "qid": 0, 00:10:46.772 "state": "enabled", 00:10:46.772 "thread": "nvmf_tgt_poll_group_000", 00:10:46.772 "listen_address": { 00:10:46.772 "trtype": "TCP", 00:10:46.772 "adrfam": "IPv4", 00:10:46.772 "traddr": "10.0.0.3", 00:10:46.772 "trsvcid": "4420" 00:10:46.772 }, 00:10:46.772 "peer_address": { 00:10:46.772 "trtype": "TCP", 00:10:46.772 "adrfam": "IPv4", 00:10:46.772 "traddr": "10.0.0.1", 00:10:46.772 "trsvcid": "48864" 00:10:46.772 }, 00:10:46.772 "auth": { 00:10:46.772 "state": "completed", 00:10:46.772 "digest": "sha256", 00:10:46.772 "dhgroup": "ffdhe2048" 00:10:46.772 } 00:10:46.772 } 00:10:46.772 ]' 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.772 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.031 20:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:10:47.597 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.597 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:47.597 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:47.597 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.597 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:47.597 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:47.597 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.597 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:47.597 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:47.856 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.422 00:10:48.422 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:48.422 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.422 20:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:48.680 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.681 { 00:10:48.681 "cntlid": 17, 00:10:48.681 "qid": 0, 00:10:48.681 "state": "enabled", 00:10:48.681 "thread": "nvmf_tgt_poll_group_000", 00:10:48.681 "listen_address": { 00:10:48.681 "trtype": "TCP", 00:10:48.681 "adrfam": "IPv4", 00:10:48.681 "traddr": "10.0.0.3", 00:10:48.681 "trsvcid": "4420" 00:10:48.681 }, 00:10:48.681 "peer_address": { 00:10:48.681 "trtype": "TCP", 00:10:48.681 "adrfam": "IPv4", 00:10:48.681 "traddr": "10.0.0.1", 00:10:48.681 "trsvcid": "39578" 00:10:48.681 }, 00:10:48.681 "auth": { 00:10:48.681 "state": "completed", 00:10:48.681 "digest": "sha256", 00:10:48.681 "dhgroup": "ffdhe3072" 00:10:48.681 } 00:10:48.681 } 00:10:48.681 ]' 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.681 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.939 20:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:10:49.505 20:52:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.505 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:49.505 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:49.505 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.505 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:49.505 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:49.505 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:49.505 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.763 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.021 00:10:50.021 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:50.021 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.021 20:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:50.280 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.280 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.280 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:50.280 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.280 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:50.280 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.280 { 00:10:50.280 "cntlid": 19, 00:10:50.280 "qid": 0, 00:10:50.280 "state": "enabled", 00:10:50.280 "thread": "nvmf_tgt_poll_group_000", 00:10:50.280 "listen_address": { 00:10:50.280 "trtype": "TCP", 00:10:50.280 "adrfam": "IPv4", 00:10:50.280 "traddr": "10.0.0.3", 00:10:50.280 "trsvcid": "4420" 00:10:50.280 }, 00:10:50.280 "peer_address": { 00:10:50.280 "trtype": "TCP", 00:10:50.280 "adrfam": "IPv4", 00:10:50.280 "traddr": "10.0.0.1", 00:10:50.280 "trsvcid": "39606" 00:10:50.280 }, 00:10:50.280 "auth": { 00:10:50.280 "state": "completed", 00:10:50.280 "digest": "sha256", 00:10:50.280 "dhgroup": "ffdhe3072" 00:10:50.280 } 00:10:50.280 } 00:10:50.280 ]' 00:10:50.280 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:50.538 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:50.538 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.538 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:50.538 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:50.538 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.538 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.538 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.796 20:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:10:51.362 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.362 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:51.362 
20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:51.362 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.362 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:51.362 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.362 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:51.362 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.940 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.211 00:10:52.211 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:52.211 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:52.211 20:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.469 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.469 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:52.469 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:52.469 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.469 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:52.469 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:52.469 { 00:10:52.469 "cntlid": 21, 00:10:52.469 "qid": 0, 00:10:52.469 "state": "enabled", 00:10:52.469 "thread": "nvmf_tgt_poll_group_000", 00:10:52.469 "listen_address": { 00:10:52.469 "trtype": "TCP", 00:10:52.469 "adrfam": "IPv4", 00:10:52.469 "traddr": "10.0.0.3", 00:10:52.469 "trsvcid": "4420" 00:10:52.469 }, 00:10:52.469 "peer_address": { 00:10:52.469 "trtype": "TCP", 00:10:52.469 "adrfam": "IPv4", 00:10:52.469 "traddr": "10.0.0.1", 00:10:52.469 "trsvcid": "39630" 00:10:52.469 }, 00:10:52.469 "auth": { 00:10:52.469 "state": "completed", 00:10:52.469 "digest": "sha256", 00:10:52.469 "dhgroup": "ffdhe3072" 00:10:52.469 } 00:10:52.469 } 00:10:52.469 ]' 00:10:52.469 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.469 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.469 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.470 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:52.470 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.470 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.470 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.470 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.727 20:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:10:53.662 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.662 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:53.662 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:53.662 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.662 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:53.662 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:53.662 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:53.662 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:53.921 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:54.179 00:10:54.179 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.179 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.179 20:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.438 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.438 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.438 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:54.438 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.438 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:54.438 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.438 { 00:10:54.438 "cntlid": 
23, 00:10:54.438 "qid": 0, 00:10:54.438 "state": "enabled", 00:10:54.438 "thread": "nvmf_tgt_poll_group_000", 00:10:54.438 "listen_address": { 00:10:54.438 "trtype": "TCP", 00:10:54.438 "adrfam": "IPv4", 00:10:54.438 "traddr": "10.0.0.3", 00:10:54.438 "trsvcid": "4420" 00:10:54.438 }, 00:10:54.438 "peer_address": { 00:10:54.438 "trtype": "TCP", 00:10:54.438 "adrfam": "IPv4", 00:10:54.438 "traddr": "10.0.0.1", 00:10:54.438 "trsvcid": "39660" 00:10:54.438 }, 00:10:54.438 "auth": { 00:10:54.438 "state": "completed", 00:10:54.438 "digest": "sha256", 00:10:54.438 "dhgroup": "ffdhe3072" 00:10:54.438 } 00:10:54.438 } 00:10:54.438 ]' 00:10:54.438 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.439 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.439 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.439 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:54.439 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.439 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.439 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.439 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.006 20:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:10:55.264 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.524 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:55.524 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:55.524 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.524 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:55.524 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:55.524 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.524 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:55.524 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:55.783 20:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.783 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.042 00:10:56.042 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.042 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.042 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.300 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.300 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.300 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:56.300 20:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.300 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:56.300 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.300 { 00:10:56.300 "cntlid": 25, 00:10:56.300 "qid": 0, 00:10:56.300 "state": "enabled", 00:10:56.300 "thread": "nvmf_tgt_poll_group_000", 00:10:56.300 "listen_address": { 00:10:56.300 "trtype": "TCP", 00:10:56.300 "adrfam": "IPv4", 00:10:56.300 "traddr": "10.0.0.3", 00:10:56.300 "trsvcid": "4420" 00:10:56.300 }, 00:10:56.300 "peer_address": { 00:10:56.300 "trtype": "TCP", 00:10:56.300 
"adrfam": "IPv4", 00:10:56.300 "traddr": "10.0.0.1", 00:10:56.300 "trsvcid": "39698" 00:10:56.300 }, 00:10:56.301 "auth": { 00:10:56.301 "state": "completed", 00:10:56.301 "digest": "sha256", 00:10:56.301 "dhgroup": "ffdhe4096" 00:10:56.301 } 00:10:56.301 } 00:10:56.301 ]' 00:10:56.301 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.301 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.301 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.559 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:56.559 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.559 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.559 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.559 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.818 20:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:10:57.386 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.386 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:57.386 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:57.386 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.386 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:57.386 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.386 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:57.386 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:57.644 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:57.644 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.644 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:57.644 20:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:57.644 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:57.644 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.644 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.644 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:57.644 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.644 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:57.644 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.645 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.212 00:10:58.212 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.212 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.212 20:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.471 { 00:10:58.471 "cntlid": 27, 00:10:58.471 "qid": 0, 00:10:58.471 "state": "enabled", 00:10:58.471 "thread": "nvmf_tgt_poll_group_000", 00:10:58.471 "listen_address": { 00:10:58.471 "trtype": "TCP", 00:10:58.471 "adrfam": "IPv4", 00:10:58.471 "traddr": "10.0.0.3", 00:10:58.471 "trsvcid": "4420" 00:10:58.471 }, 00:10:58.471 "peer_address": { 00:10:58.471 "trtype": "TCP", 00:10:58.471 "adrfam": "IPv4", 00:10:58.471 "traddr": "10.0.0.1", 00:10:58.471 "trsvcid": "36714" 00:10:58.471 }, 00:10:58.471 "auth": { 00:10:58.471 "state": "completed", 00:10:58.471 "digest": "sha256", 00:10:58.471 "dhgroup": "ffdhe4096" 00:10:58.471 } 00:10:58.471 } 00:10:58.471 ]' 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.471 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.730 20:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:10:59.665 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.666 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:10:59.666 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.666 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.233 00:11:00.233 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.233 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.233 20:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.492 { 00:11:00.492 "cntlid": 29, 00:11:00.492 "qid": 0, 00:11:00.492 "state": "enabled", 00:11:00.492 "thread": "nvmf_tgt_poll_group_000", 00:11:00.492 "listen_address": { 00:11:00.492 "trtype": "TCP", 00:11:00.492 "adrfam": "IPv4", 00:11:00.492 "traddr": "10.0.0.3", 00:11:00.492 "trsvcid": "4420" 00:11:00.492 }, 00:11:00.492 "peer_address": { 00:11:00.492 "trtype": "TCP", 00:11:00.492 "adrfam": "IPv4", 00:11:00.492 "traddr": "10.0.0.1", 00:11:00.492 "trsvcid": "36748" 00:11:00.492 }, 00:11:00.492 "auth": { 00:11:00.492 "state": "completed", 00:11:00.492 "digest": "sha256", 00:11:00.492 "dhgroup": "ffdhe4096" 00:11:00.492 } 00:11:00.492 } 00:11:00.492 ]' 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:00.492 20:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.492 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.061 20:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:11:01.685 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.685 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:01.685 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:01.685 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.685 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:01.685 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.685 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:01.685 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.944 20:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:01.944 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:02.203 00:11:02.203 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.203 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.203 20:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.462 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.462 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.462 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:02.462 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.462 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:02.462 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:02.462 { 00:11:02.462 "cntlid": 31, 00:11:02.462 "qid": 0, 00:11:02.462 "state": "enabled", 00:11:02.462 "thread": "nvmf_tgt_poll_group_000", 00:11:02.462 "listen_address": { 00:11:02.462 "trtype": "TCP", 00:11:02.462 "adrfam": "IPv4", 00:11:02.462 "traddr": "10.0.0.3", 00:11:02.462 "trsvcid": "4420" 00:11:02.462 }, 00:11:02.462 "peer_address": { 00:11:02.462 "trtype": "TCP", 00:11:02.462 "adrfam": "IPv4", 00:11:02.462 "traddr": "10.0.0.1", 00:11:02.462 "trsvcid": "36792" 00:11:02.462 }, 00:11:02.462 "auth": { 00:11:02.462 "state": "completed", 00:11:02.462 "digest": "sha256", 00:11:02.462 "dhgroup": "ffdhe4096" 00:11:02.462 } 00:11:02.462 } 00:11:02.462 ]' 00:11:02.462 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:02.462 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.462 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.721 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:02.721 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.721 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.721 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.721 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.980 20:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:11:03.548 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.548 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:03.548 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:03.548 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.548 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:03.548 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:03.548 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:03.548 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:03.548 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:03.805 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:03.805 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:03.805 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:03.805 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:03.805 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:03.806 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.806 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.806 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:03.806 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.806 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:03.806 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:03.806 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.064 00:11:04.064 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.064 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.064 20:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.632 { 00:11:04.632 "cntlid": 33, 00:11:04.632 "qid": 0, 00:11:04.632 "state": "enabled", 00:11:04.632 "thread": "nvmf_tgt_poll_group_000", 00:11:04.632 "listen_address": { 00:11:04.632 "trtype": "TCP", 00:11:04.632 "adrfam": "IPv4", 00:11:04.632 "traddr": "10.0.0.3", 00:11:04.632 "trsvcid": "4420" 00:11:04.632 }, 00:11:04.632 "peer_address": { 00:11:04.632 "trtype": "TCP", 00:11:04.632 "adrfam": "IPv4", 00:11:04.632 "traddr": "10.0.0.1", 00:11:04.632 "trsvcid": "36822" 00:11:04.632 }, 00:11:04.632 "auth": { 00:11:04.632 "state": "completed", 00:11:04.632 "digest": "sha256", 00:11:04.632 "dhgroup": "ffdhe6144" 00:11:04.632 } 00:11:04.632 } 00:11:04.632 ]' 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.632 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.891 20:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 
78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:11:05.459 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.459 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:05.459 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:05.459 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.459 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:05.459 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:05.459 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:05.459 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:05.718 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.287 00:11:06.287 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.287 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.287 20:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.287 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.287 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.287 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:06.287 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.287 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:06.287 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.287 { 00:11:06.287 "cntlid": 35, 00:11:06.287 "qid": 0, 00:11:06.287 "state": "enabled", 00:11:06.287 "thread": "nvmf_tgt_poll_group_000", 00:11:06.287 "listen_address": { 00:11:06.287 "trtype": "TCP", 00:11:06.287 "adrfam": "IPv4", 00:11:06.287 "traddr": "10.0.0.3", 00:11:06.287 "trsvcid": "4420" 00:11:06.287 }, 00:11:06.287 "peer_address": { 00:11:06.287 "trtype": "TCP", 00:11:06.287 "adrfam": "IPv4", 00:11:06.287 "traddr": "10.0.0.1", 00:11:06.287 "trsvcid": "36838" 00:11:06.287 }, 00:11:06.287 "auth": { 00:11:06.287 "state": "completed", 00:11:06.287 "digest": "sha256", 00:11:06.287 "dhgroup": "ffdhe6144" 00:11:06.287 } 00:11:06.287 } 00:11:06.287 ]' 00:11:06.287 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:06.546 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.546 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:06.546 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:06.546 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:06.546 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.546 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.546 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.805 20:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:11:07.372 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.372 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.372 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:07.372 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:07.372 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.372 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:07.372 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.372 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:07.372 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:07.630 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.197 00:11:08.197 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.197 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.197 20:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.455 { 00:11:08.455 "cntlid": 37, 00:11:08.455 "qid": 0, 00:11:08.455 "state": "enabled", 00:11:08.455 "thread": "nvmf_tgt_poll_group_000", 00:11:08.455 "listen_address": { 00:11:08.455 "trtype": "TCP", 00:11:08.455 "adrfam": "IPv4", 00:11:08.455 "traddr": "10.0.0.3", 00:11:08.455 "trsvcid": "4420" 00:11:08.455 }, 00:11:08.455 "peer_address": { 00:11:08.455 "trtype": "TCP", 00:11:08.455 "adrfam": "IPv4", 00:11:08.455 "traddr": "10.0.0.1", 00:11:08.455 "trsvcid": "58800" 00:11:08.455 }, 00:11:08.455 "auth": { 00:11:08.455 "state": "completed", 00:11:08.455 "digest": "sha256", 00:11:08.455 "dhgroup": "ffdhe6144" 00:11:08.455 } 00:11:08.455 } 00:11:08.455 ]' 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.455 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.714 20:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:11:09.649 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 
00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:09.650 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.230 00:11:10.230 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.231 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.231 20:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:10.501 20:52:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.501 { 00:11:10.501 "cntlid": 39, 00:11:10.501 "qid": 0, 00:11:10.501 "state": "enabled", 00:11:10.501 "thread": "nvmf_tgt_poll_group_000", 00:11:10.501 "listen_address": { 00:11:10.501 "trtype": "TCP", 00:11:10.501 "adrfam": "IPv4", 00:11:10.501 "traddr": "10.0.0.3", 00:11:10.501 "trsvcid": "4420" 00:11:10.501 }, 00:11:10.501 "peer_address": { 00:11:10.501 "trtype": "TCP", 00:11:10.501 "adrfam": "IPv4", 00:11:10.501 "traddr": "10.0.0.1", 00:11:10.501 "trsvcid": "58812" 00:11:10.501 }, 00:11:10.501 "auth": { 00:11:10.501 "state": "completed", 00:11:10.501 "digest": "sha256", 00:11:10.501 "dhgroup": "ffdhe6144" 00:11:10.501 } 00:11:10.501 } 00:11:10.501 ]' 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.501 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.502 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.759 20:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.693 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.259 00:11:12.259 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.259 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.259 20:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:12.517 { 00:11:12.517 "cntlid": 41, 00:11:12.517 "qid": 0, 
00:11:12.517 "state": "enabled", 00:11:12.517 "thread": "nvmf_tgt_poll_group_000", 00:11:12.517 "listen_address": { 00:11:12.517 "trtype": "TCP", 00:11:12.517 "adrfam": "IPv4", 00:11:12.517 "traddr": "10.0.0.3", 00:11:12.517 "trsvcid": "4420" 00:11:12.517 }, 00:11:12.517 "peer_address": { 00:11:12.517 "trtype": "TCP", 00:11:12.517 "adrfam": "IPv4", 00:11:12.517 "traddr": "10.0.0.1", 00:11:12.517 "trsvcid": "58836" 00:11:12.517 }, 00:11:12.517 "auth": { 00:11:12.517 "state": "completed", 00:11:12.517 "digest": "sha256", 00:11:12.517 "dhgroup": "ffdhe8192" 00:11:12.517 } 00:11:12.517 } 00:11:12.517 ]' 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:12.517 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:12.776 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.776 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.776 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.776 20:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:11:13.342 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.342 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:13.342 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:13.342 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.600 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:13.600 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.600 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:13.600 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:13.600 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.601 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.167 00:11:14.167 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.167 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.167 20:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:14.425 { 00:11:14.425 "cntlid": 43, 00:11:14.425 "qid": 0, 00:11:14.425 "state": "enabled", 00:11:14.425 "thread": "nvmf_tgt_poll_group_000", 00:11:14.425 "listen_address": { 00:11:14.425 "trtype": "TCP", 00:11:14.425 "adrfam": "IPv4", 00:11:14.425 "traddr": "10.0.0.3", 00:11:14.425 "trsvcid": "4420" 00:11:14.425 }, 00:11:14.425 "peer_address": { 00:11:14.425 "trtype": "TCP", 00:11:14.425 "adrfam": "IPv4", 00:11:14.425 "traddr": "10.0.0.1", 
00:11:14.425 "trsvcid": "58868" 00:11:14.425 }, 00:11:14.425 "auth": { 00:11:14.425 "state": "completed", 00:11:14.425 "digest": "sha256", 00:11:14.425 "dhgroup": "ffdhe8192" 00:11:14.425 } 00:11:14.425 } 00:11:14.425 ]' 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:14.425 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.683 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.683 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.683 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.942 20:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:11:15.509 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.509 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:15.509 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:15.509 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.509 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:15.509 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.509 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:15.509 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:15.767 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:15.767 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.767 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:15.767 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:15.767 20:52:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:15.767 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.768 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.768 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:15.768 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.768 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:15.768 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.768 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.335 00:11:16.335 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:16.335 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.335 20:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.593 { 00:11:16.593 "cntlid": 45, 00:11:16.593 "qid": 0, 00:11:16.593 "state": "enabled", 00:11:16.593 "thread": "nvmf_tgt_poll_group_000", 00:11:16.593 "listen_address": { 00:11:16.593 "trtype": "TCP", 00:11:16.593 "adrfam": "IPv4", 00:11:16.593 "traddr": "10.0.0.3", 00:11:16.593 "trsvcid": "4420" 00:11:16.593 }, 00:11:16.593 "peer_address": { 00:11:16.593 "trtype": "TCP", 00:11:16.593 "adrfam": "IPv4", 00:11:16.593 "traddr": "10.0.0.1", 00:11:16.593 "trsvcid": "58888" 00:11:16.593 }, 00:11:16.593 "auth": { 00:11:16.593 "state": "completed", 00:11:16.593 "digest": "sha256", 00:11:16.593 "dhgroup": "ffdhe8192" 00:11:16.593 } 00:11:16.593 } 00:11:16.593 ]' 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.593 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.852 20:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:11:17.419 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.419 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:17.419 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:17.419 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:17.419 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.419 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:17.419 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 
--dhchap-key key3 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.678 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.245 00:11:18.245 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:18.245 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.245 20:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:18.504 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.504 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.504 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:18.504 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.504 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:18.504 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:18.504 { 00:11:18.504 "cntlid": 47, 00:11:18.504 "qid": 0, 00:11:18.504 "state": "enabled", 00:11:18.504 "thread": "nvmf_tgt_poll_group_000", 00:11:18.504 "listen_address": { 00:11:18.504 "trtype": "TCP", 00:11:18.504 "adrfam": "IPv4", 00:11:18.504 "traddr": "10.0.0.3", 00:11:18.504 "trsvcid": "4420" 00:11:18.504 }, 00:11:18.504 "peer_address": { 00:11:18.504 "trtype": "TCP", 00:11:18.504 "adrfam": "IPv4", 00:11:18.504 "traddr": "10.0.0.1", 00:11:18.504 "trsvcid": "52990" 00:11:18.504 }, 00:11:18.504 "auth": { 00:11:18.504 "state": "completed", 00:11:18.504 "digest": "sha256", 00:11:18.504 "dhgroup": "ffdhe8192" 00:11:18.504 } 00:11:18.504 } 00:11:18.504 ]' 00:11:18.504 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.504 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.504 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.762 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:18.762 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.762 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:18.762 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.762 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.021 20:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:11:19.588 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.588 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:19.588 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:19.588 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.588 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:19.588 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:19.588 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:19.588 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.588 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:19.588 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.847 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.106 00:11:20.106 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.106 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.106 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.365 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.365 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.365 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:20.365 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.365 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:20.365 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.365 { 00:11:20.365 "cntlid": 49, 00:11:20.365 "qid": 0, 00:11:20.365 "state": "enabled", 00:11:20.365 "thread": "nvmf_tgt_poll_group_000", 00:11:20.365 "listen_address": { 00:11:20.365 "trtype": "TCP", 00:11:20.365 "adrfam": "IPv4", 00:11:20.365 "traddr": "10.0.0.3", 00:11:20.365 "trsvcid": "4420" 00:11:20.365 }, 00:11:20.365 "peer_address": { 00:11:20.365 "trtype": "TCP", 00:11:20.365 "adrfam": "IPv4", 00:11:20.365 "traddr": "10.0.0.1", 00:11:20.365 "trsvcid": "53006" 00:11:20.365 }, 00:11:20.365 "auth": { 00:11:20.365 "state": "completed", 00:11:20.365 "digest": "sha384", 00:11:20.365 "dhgroup": "null" 00:11:20.365 } 00:11:20.365 } 00:11:20.365 ]' 00:11:20.365 20:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.365 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.365 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.365 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:20.365 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.365 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.365 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.365 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.624 20:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.560 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.127 00:11:22.127 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:22.127 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.127 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.127 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.127 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.127 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:22.127 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.127 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:22.127 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.127 { 00:11:22.127 "cntlid": 51, 00:11:22.127 "qid": 0, 00:11:22.127 "state": "enabled", 00:11:22.127 "thread": "nvmf_tgt_poll_group_000", 00:11:22.127 "listen_address": { 00:11:22.127 "trtype": "TCP", 00:11:22.127 "adrfam": "IPv4", 00:11:22.127 "traddr": "10.0.0.3", 00:11:22.127 "trsvcid": "4420" 00:11:22.127 }, 00:11:22.127 "peer_address": { 00:11:22.127 "trtype": "TCP", 00:11:22.127 "adrfam": "IPv4", 00:11:22.127 "traddr": "10.0.0.1", 00:11:22.127 "trsvcid": "53026" 00:11:22.127 }, 00:11:22.127 "auth": { 00:11:22.127 "state": "completed", 00:11:22.127 "digest": "sha384", 00:11:22.127 "dhgroup": "null" 00:11:22.127 } 00:11:22.127 } 00:11:22.127 ]' 00:11:22.127 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.386 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.386 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.386 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:22.386 20:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.386 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.386 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.386 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.643 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret 
DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:11:23.209 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.209 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:23.209 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:23.209 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.209 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:23.209 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.209 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:23.209 20:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.467 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.726 00:11:23.726 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.726 20:52:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.726 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.984 { 00:11:23.984 "cntlid": 53, 00:11:23.984 "qid": 0, 00:11:23.984 "state": "enabled", 00:11:23.984 "thread": "nvmf_tgt_poll_group_000", 00:11:23.984 "listen_address": { 00:11:23.984 "trtype": "TCP", 00:11:23.984 "adrfam": "IPv4", 00:11:23.984 "traddr": "10.0.0.3", 00:11:23.984 "trsvcid": "4420" 00:11:23.984 }, 00:11:23.984 "peer_address": { 00:11:23.984 "trtype": "TCP", 00:11:23.984 "adrfam": "IPv4", 00:11:23.984 "traddr": "10.0.0.1", 00:11:23.984 "trsvcid": "53062" 00:11:23.984 }, 00:11:23.984 "auth": { 00:11:23.984 "state": "completed", 00:11:23.984 "digest": "sha384", 00:11:23.984 "dhgroup": "null" 00:11:23.984 } 00:11:23.984 } 00:11:23.984 ]' 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:23.984 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.243 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.243 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.243 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.243 20:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:11:24.810 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.810 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:24.810 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:24.810 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.810 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:24.810 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.810 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:24.810 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.069 20:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.327 00:11:25.586 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.586 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.586 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:25.845 { 00:11:25.845 "cntlid": 55, 00:11:25.845 "qid": 0, 00:11:25.845 "state": "enabled", 00:11:25.845 "thread": "nvmf_tgt_poll_group_000", 00:11:25.845 "listen_address": { 00:11:25.845 "trtype": "TCP", 00:11:25.845 "adrfam": "IPv4", 00:11:25.845 "traddr": "10.0.0.3", 00:11:25.845 "trsvcid": "4420" 00:11:25.845 }, 00:11:25.845 "peer_address": { 00:11:25.845 "trtype": "TCP", 00:11:25.845 "adrfam": "IPv4", 00:11:25.845 "traddr": "10.0.0.1", 00:11:25.845 "trsvcid": "53086" 00:11:25.845 }, 00:11:25.845 "auth": { 00:11:25.845 "state": "completed", 00:11:25.845 "digest": "sha384", 00:11:25.845 "dhgroup": "null" 00:11:25.845 } 00:11:25.845 } 00:11:25.845 ]' 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.845 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.104 20:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:11:26.671 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.671 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:26.671 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:26.671 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.671 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:26.671 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.671 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.671 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:26.671 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.929 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.188 00:11:27.188 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.188 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.188 20:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.446 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.446 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.446 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:27.446 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.446 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:27.446 20:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.446 { 00:11:27.446 "cntlid": 57, 00:11:27.446 "qid": 0, 00:11:27.446 "state": "enabled", 00:11:27.446 "thread": "nvmf_tgt_poll_group_000", 00:11:27.446 "listen_address": { 00:11:27.446 "trtype": "TCP", 00:11:27.446 "adrfam": "IPv4", 00:11:27.446 "traddr": "10.0.0.3", 00:11:27.446 "trsvcid": "4420" 00:11:27.446 }, 00:11:27.446 "peer_address": { 00:11:27.446 "trtype": "TCP", 00:11:27.446 "adrfam": "IPv4", 00:11:27.446 "traddr": "10.0.0.1", 00:11:27.446 "trsvcid": "60538" 00:11:27.446 }, 00:11:27.446 "auth": { 00:11:27.446 "state": "completed", 00:11:27.446 "digest": "sha384", 00:11:27.446 "dhgroup": "ffdhe2048" 00:11:27.446 } 00:11:27.446 } 00:11:27.446 ]' 00:11:27.446 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.446 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.446 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.446 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:27.446 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.705 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.705 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.705 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.963 20:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:11:28.539 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.539 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:28.539 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:28.539 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.539 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:28.539 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:28.539 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:28.539 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.812 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.070 00:11:29.070 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.070 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.070 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.328 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.328 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.328 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:29.328 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.328 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:29.328 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.328 { 00:11:29.328 "cntlid": 59, 00:11:29.328 "qid": 0, 00:11:29.328 "state": "enabled", 00:11:29.328 "thread": "nvmf_tgt_poll_group_000", 00:11:29.328 "listen_address": { 00:11:29.328 "trtype": "TCP", 00:11:29.328 "adrfam": "IPv4", 00:11:29.328 "traddr": "10.0.0.3", 00:11:29.328 "trsvcid": "4420" 
00:11:29.328 }, 00:11:29.328 "peer_address": { 00:11:29.328 "trtype": "TCP", 00:11:29.328 "adrfam": "IPv4", 00:11:29.328 "traddr": "10.0.0.1", 00:11:29.328 "trsvcid": "60572" 00:11:29.328 }, 00:11:29.328 "auth": { 00:11:29.328 "state": "completed", 00:11:29.328 "digest": "sha384", 00:11:29.328 "dhgroup": "ffdhe2048" 00:11:29.328 } 00:11:29.328 } 00:11:29.328 ]' 00:11:29.328 20:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.328 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.328 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.328 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:29.328 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.587 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.587 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.587 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.845 20:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:11:30.413 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.413 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:30.413 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:30.413 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.413 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:30.413 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.413 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:30.413 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
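
[Reader note, not part of the captured trace] The entries above repeat one pattern per digest/DH-group/key combination: the test restricts the host-side initiator with bdev_nvme_set_options, allows the host on the subsystem with nvmf_subsystem_add_host (optionally with a controller key for bidirectional auth), attaches a controller so DH-HMAC-CHAP actually runs, then checks the negotiated parameters from nvmf_subsystem_get_qpairs with jq before detaching. The sketch below condenses that per-iteration flow into a standalone script for orientation only; the RPC names, flags, addresses, NQNs, and key names are taken from the trace, while the assumption that key0/ckey0 were registered earlier in the script and that the target's RPC server listens on its default socket are not shown in this excerpt.

    #!/usr/bin/env bash
    # Illustrative sketch of the connect_authenticate pattern seen in the trace above.
    # Assumes: an SPDK nvmf target listening on 10.0.0.3:4420 whose RPC server uses the
    # default socket, a host-side bdev RPC server on /var/tmp/host.sock, and DH-CHAP keys
    # "key0"/"ckey0" already registered earlier in the test (not shown in this excerpt).
    set -euo pipefail

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9

    digest=sha384       # one of the digests the test iterates over
    dhgroup=ffdhe2048   # one of the DH groups the test iterates over

    # Restrict the host-side initiator to the digest/dhgroup under test.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Allow the host on the subsystem; supplying --dhchap-ctrlr-key makes the
    # authentication bidirectional (the key3 iterations omit it).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller over TCP; this is where DH-HMAC-CHAP is performed.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the result the same way the test does with jq.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    jq -r '.[0].auth.state'   <<<"$qpairs"   # expect "completed"
    jq -r '.[0].auth.digest'  <<<"$qpairs"   # expect the digest under test
    jq -r '.[0].auth.dhgroup' <<<"$qpairs"   # expect the dhgroup under test

    # Tear down before the next combination.
    "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

After this check the trace additionally exercises the kernel initiator path (nvme connect with --dhchap-secret/--dhchap-ctrl-secret DHHC-1 strings, then nvme disconnect and nvmf_subsystem_remove_host) before moving on to the next digest/dhgroup combination, which is the repetition visible in the surrounding log.
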
00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.672 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.930 00:11:30.930 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.930 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.930 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.189 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.189 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.189 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:31.189 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.189 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:31.189 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.189 { 00:11:31.189 "cntlid": 61, 00:11:31.189 "qid": 0, 00:11:31.189 "state": "enabled", 00:11:31.189 "thread": "nvmf_tgt_poll_group_000", 00:11:31.189 "listen_address": { 00:11:31.189 "trtype": "TCP", 00:11:31.189 "adrfam": "IPv4", 00:11:31.189 "traddr": "10.0.0.3", 00:11:31.189 "trsvcid": "4420" 00:11:31.189 }, 00:11:31.189 "peer_address": { 00:11:31.189 "trtype": "TCP", 00:11:31.189 "adrfam": "IPv4", 00:11:31.189 "traddr": "10.0.0.1", 00:11:31.189 "trsvcid": "60594" 00:11:31.189 }, 00:11:31.189 "auth": { 00:11:31.189 "state": "completed", 00:11:31.189 "digest": "sha384", 00:11:31.189 "dhgroup": "ffdhe2048" 00:11:31.189 } 00:11:31.189 } 00:11:31.189 ]' 00:11:31.189 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.189 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.189 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.447 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:31.447 20:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.447 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.447 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.447 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.705 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:11:32.272 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.272 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:32.272 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:32.272 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.272 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:32.272 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.272 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:32.272 20:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:32.531 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:32.790 00:11:32.790 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.790 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.790 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.048 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.048 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.048 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:33.048 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.048 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:33.048 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.048 { 00:11:33.048 "cntlid": 63, 00:11:33.048 "qid": 0, 00:11:33.048 "state": "enabled", 00:11:33.048 "thread": "nvmf_tgt_poll_group_000", 00:11:33.048 "listen_address": { 00:11:33.048 "trtype": "TCP", 00:11:33.048 "adrfam": "IPv4", 00:11:33.048 "traddr": "10.0.0.3", 00:11:33.048 "trsvcid": "4420" 00:11:33.048 }, 00:11:33.048 "peer_address": { 00:11:33.048 "trtype": "TCP", 00:11:33.048 "adrfam": "IPv4", 00:11:33.048 "traddr": "10.0.0.1", 00:11:33.048 "trsvcid": "60630" 00:11:33.048 }, 00:11:33.048 "auth": { 00:11:33.048 "state": "completed", 00:11:33.048 "digest": "sha384", 00:11:33.048 "dhgroup": "ffdhe2048" 00:11:33.048 } 00:11:33.048 } 00:11:33.048 ]' 00:11:33.048 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.307 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.307 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.307 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:33.307 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:11:33.307 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.307 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.307 20:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.565 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:11:34.132 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.132 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:34.132 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:34.132 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.132 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:34.132 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.132 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.132 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:34.132 20:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:34.390 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:34.390 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.390 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:34.391 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:34.391 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:34.391 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.391 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.391 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:34.391 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.391 20:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:34.391 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.391 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.958 00:11:34.958 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.958 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.958 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.217 { 00:11:35.217 "cntlid": 65, 00:11:35.217 "qid": 0, 00:11:35.217 "state": "enabled", 00:11:35.217 "thread": "nvmf_tgt_poll_group_000", 00:11:35.217 "listen_address": { 00:11:35.217 "trtype": "TCP", 00:11:35.217 "adrfam": "IPv4", 00:11:35.217 "traddr": "10.0.0.3", 00:11:35.217 "trsvcid": "4420" 00:11:35.217 }, 00:11:35.217 "peer_address": { 00:11:35.217 "trtype": "TCP", 00:11:35.217 "adrfam": "IPv4", 00:11:35.217 "traddr": "10.0.0.1", 00:11:35.217 "trsvcid": "60656" 00:11:35.217 }, 00:11:35.217 "auth": { 00:11:35.217 "state": "completed", 00:11:35.217 "digest": "sha384", 00:11:35.217 "dhgroup": "ffdhe3072" 00:11:35.217 } 00:11:35.217 } 00:11:35.217 ]' 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.217 20:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.475 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:11:36.043 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.043 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:36.043 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:36.043 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.043 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:36.043 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.043 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:36.043 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:11:36.301 20:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.559 00:11:36.559 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.559 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.559 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.818 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.818 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.818 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:36.818 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.076 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:37.076 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.076 { 00:11:37.076 "cntlid": 67, 00:11:37.076 "qid": 0, 00:11:37.076 "state": "enabled", 00:11:37.076 "thread": "nvmf_tgt_poll_group_000", 00:11:37.076 "listen_address": { 00:11:37.076 "trtype": "TCP", 00:11:37.076 "adrfam": "IPv4", 00:11:37.076 "traddr": "10.0.0.3", 00:11:37.076 "trsvcid": "4420" 00:11:37.076 }, 00:11:37.076 "peer_address": { 00:11:37.076 "trtype": "TCP", 00:11:37.076 "adrfam": "IPv4", 00:11:37.076 "traddr": "10.0.0.1", 00:11:37.076 "trsvcid": "52452" 00:11:37.076 }, 00:11:37.076 "auth": { 00:11:37.076 "state": "completed", 00:11:37.076 "digest": "sha384", 00:11:37.076 "dhgroup": "ffdhe3072" 00:11:37.076 } 00:11:37.076 } 00:11:37.076 ]' 00:11:37.076 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.076 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.076 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.076 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:37.076 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.076 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.076 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.076 20:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.334 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 
78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.269 20:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:11:38.837 00:11:38.837 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.837 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.837 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.096 { 00:11:39.096 "cntlid": 69, 00:11:39.096 "qid": 0, 00:11:39.096 "state": "enabled", 00:11:39.096 "thread": "nvmf_tgt_poll_group_000", 00:11:39.096 "listen_address": { 00:11:39.096 "trtype": "TCP", 00:11:39.096 "adrfam": "IPv4", 00:11:39.096 "traddr": "10.0.0.3", 00:11:39.096 "trsvcid": "4420" 00:11:39.096 }, 00:11:39.096 "peer_address": { 00:11:39.096 "trtype": "TCP", 00:11:39.096 "adrfam": "IPv4", 00:11:39.096 "traddr": "10.0.0.1", 00:11:39.096 "trsvcid": "52494" 00:11:39.096 }, 00:11:39.096 "auth": { 00:11:39.096 "state": "completed", 00:11:39.096 "digest": "sha384", 00:11:39.096 "dhgroup": "ffdhe3072" 00:11:39.096 } 00:11:39.096 } 00:11:39.096 ]' 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.096 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.355 20:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:11:39.921 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
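The cycle repeated throughout this trace is the connect_authenticate flow from target/auth.sh: the host-side bdev layer is restricted to one digest/dhgroup pair, the host NQN is registered on the subsystem with the DH-HMAC-CHAP key under test, a controller is attached and its qpair auth block inspected, and the same handshake is then re-run through the kernel initiator with nvme connect before the host is removed again. A condensed sketch of one iteration, reconstructed only from commands visible in the trace (socket path, addresses and NQNs copied from the log; rpc_cmd is the suite's wrapper for the target's RPC socket; the DHHC-1 secrets are placeholders for the configured keys):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  uuid=78d593be-f127-44be-9e85-a8fa7f0a66f9
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$uuid
  subnqn=nqn.2024-03.io.spdk:cnode0

  # host side: allow only the digest/dhgroup pair under test
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # target side: register the host with the key (and controller key) under test
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # attach a controller through the authenticated path, inspect the qpair, detach
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # repeat the handshake from the kernel initiator, then clean up
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$uuid" \
      --dhchap-secret 'DHHC-1:02:<key2 secret>' --dhchap-ctrl-secret 'DHHC-1:01:<ckey2 secret>'
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"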
00:11:39.921 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:39.921 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:39.921 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.921 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:39.921 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.921 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:39.921 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.189 20:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.475 00:11:40.475 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.475 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.475 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.734 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.734 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.734 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:40.734 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.734 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:40.734 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.734 { 00:11:40.734 "cntlid": 71, 00:11:40.734 "qid": 0, 00:11:40.734 "state": "enabled", 00:11:40.734 "thread": "nvmf_tgt_poll_group_000", 00:11:40.734 "listen_address": { 00:11:40.734 "trtype": "TCP", 00:11:40.734 "adrfam": "IPv4", 00:11:40.734 "traddr": "10.0.0.3", 00:11:40.734 "trsvcid": "4420" 00:11:40.734 }, 00:11:40.734 "peer_address": { 00:11:40.734 "trtype": "TCP", 00:11:40.734 "adrfam": "IPv4", 00:11:40.734 "traddr": "10.0.0.1", 00:11:40.734 "trsvcid": "52516" 00:11:40.734 }, 00:11:40.734 "auth": { 00:11:40.734 "state": "completed", 00:11:40.734 "digest": "sha384", 00:11:40.734 "dhgroup": "ffdhe3072" 00:11:40.734 } 00:11:40.734 } 00:11:40.734 ]' 00:11:40.734 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.734 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.734 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.992 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:40.992 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:40.992 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.993 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.993 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.252 20:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:11:41.819 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.819 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:41.819 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:41.819 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.819 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 
]] 00:11:41.819 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:41.819 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.819 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:41.819 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.078 20:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.336 00:11:42.336 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.336 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.336 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.596 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.596 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.596 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:42.596 20:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.854 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:42.854 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.854 { 00:11:42.854 "cntlid": 73, 00:11:42.854 "qid": 0, 00:11:42.854 "state": "enabled", 00:11:42.854 "thread": "nvmf_tgt_poll_group_000", 00:11:42.854 "listen_address": { 00:11:42.854 "trtype": "TCP", 00:11:42.854 "adrfam": "IPv4", 00:11:42.854 "traddr": "10.0.0.3", 00:11:42.854 "trsvcid": "4420" 00:11:42.854 }, 00:11:42.854 "peer_address": { 00:11:42.854 "trtype": "TCP", 00:11:42.854 "adrfam": "IPv4", 00:11:42.854 "traddr": "10.0.0.1", 00:11:42.854 "trsvcid": "52560" 00:11:42.854 }, 00:11:42.854 "auth": { 00:11:42.854 "state": "completed", 00:11:42.854 "digest": "sha384", 00:11:42.854 "dhgroup": "ffdhe4096" 00:11:42.854 } 00:11:42.854 } 00:11:42.854 ]' 00:11:42.854 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.854 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.854 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.854 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:42.854 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.854 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.854 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.854 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.112 20:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:11:43.680 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.680 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:43.680 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:43.680 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.680 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:43.680 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:43.680 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:43.680 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.939 20:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.507 00:11:44.507 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.507 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.507 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.507 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.507 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.507 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:44.507 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.766 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:44.766 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.766 { 00:11:44.766 "cntlid": 75, 00:11:44.766 "qid": 0, 00:11:44.766 
"state": "enabled", 00:11:44.766 "thread": "nvmf_tgt_poll_group_000", 00:11:44.766 "listen_address": { 00:11:44.766 "trtype": "TCP", 00:11:44.766 "adrfam": "IPv4", 00:11:44.766 "traddr": "10.0.0.3", 00:11:44.766 "trsvcid": "4420" 00:11:44.766 }, 00:11:44.766 "peer_address": { 00:11:44.766 "trtype": "TCP", 00:11:44.766 "adrfam": "IPv4", 00:11:44.766 "traddr": "10.0.0.1", 00:11:44.766 "trsvcid": "52602" 00:11:44.766 }, 00:11:44.766 "auth": { 00:11:44.766 "state": "completed", 00:11:44.766 "digest": "sha384", 00:11:44.766 "dhgroup": "ffdhe4096" 00:11:44.766 } 00:11:44.766 } 00:11:44.766 ]' 00:11:44.766 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.766 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.766 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.766 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:44.766 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.766 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.766 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.766 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.025 20:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:11:45.593 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.593 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:45.593 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:45.593 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.593 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:45.593 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.593 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:45.593 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.852 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.420 00:11:46.420 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.420 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.420 20:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:46.679 { 00:11:46.679 "cntlid": 77, 00:11:46.679 "qid": 0, 00:11:46.679 "state": "enabled", 00:11:46.679 "thread": "nvmf_tgt_poll_group_000", 00:11:46.679 "listen_address": { 00:11:46.679 "trtype": "TCP", 00:11:46.679 "adrfam": "IPv4", 00:11:46.679 "traddr": "10.0.0.3", 00:11:46.679 "trsvcid": "4420" 00:11:46.679 }, 00:11:46.679 "peer_address": { 00:11:46.679 "trtype": "TCP", 00:11:46.679 "adrfam": "IPv4", 00:11:46.679 "traddr": "10.0.0.1", 00:11:46.679 "trsvcid": "52626" 00:11:46.679 }, 00:11:46.679 
"auth": { 00:11:46.679 "state": "completed", 00:11:46.679 "digest": "sha384", 00:11:46.679 "dhgroup": "ffdhe4096" 00:11:46.679 } 00:11:46.679 } 00:11:46.679 ]' 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.679 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.938 20:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:11:47.506 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.506 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:47.506 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:47.506 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.506 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:47.506 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.506 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:47.506 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:47.764 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.332 00:11:48.332 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.332 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.332 20:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.332 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.332 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.332 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:48.332 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.332 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:48.332 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.332 { 00:11:48.332 "cntlid": 79, 00:11:48.332 "qid": 0, 00:11:48.332 "state": "enabled", 00:11:48.332 "thread": "nvmf_tgt_poll_group_000", 00:11:48.332 "listen_address": { 00:11:48.332 "trtype": "TCP", 00:11:48.332 "adrfam": "IPv4", 00:11:48.332 "traddr": "10.0.0.3", 00:11:48.332 "trsvcid": "4420" 00:11:48.332 }, 00:11:48.332 "peer_address": { 00:11:48.332 "trtype": "TCP", 00:11:48.332 "adrfam": "IPv4", 00:11:48.332 "traddr": "10.0.0.1", 00:11:48.332 "trsvcid": "48090" 00:11:48.332 }, 00:11:48.332 "auth": { 00:11:48.332 "state": "completed", 00:11:48.332 "digest": "sha384", 00:11:48.332 "dhgroup": "ffdhe4096" 00:11:48.332 } 00:11:48.332 } 00:11:48.332 ]' 00:11:48.332 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.591 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.591 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
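The pass that just completed for key3 over sha384/ffdhe4096 follows the same per-key cycle that target/auth.sh repeats for every digest, DH group and key index: restrict the host-side NVMe bdev module to one digest/dhgroup pair, register the host NQN on the subsystem with the key under test, attach a controller through the host RPC socket so DH-HMAC-CHAP runs during CONNECT, read the qpair back and check its negotiated auth fields, then detach and remove the host. A condensed sketch of that cycle is below; the rpc.py path, socket, NQNs and key name are copied from this run, and the target-side calls (made through the test's rpc_cmd helper in the trace) are approximated here by calling rpc.py against the target's default socket, so treat it as illustrative rather than a fixed recipe.

# One connect_authenticate cycle, reconstructed from the trace above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9

# 1. Limit the host-side initiator to the digest/dhgroup pair under test.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# 2. Allow the host on the subsystem with the key under test (target-side RPC).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key3

# 3. Attach a controller from the host app; DH-HMAC-CHAP runs during CONNECT.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key3

# 4. Confirm the qpair negotiated the expected parameters, as the jq checks above do.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# 5. Tear down before the next combination.
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0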
00:11:48.591 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:48.591 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.591 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.591 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.591 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.850 20:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:11:49.417 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.417 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:49.417 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:49.417 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.417 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:49.417 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:49.417 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.417 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:49.417 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.676 20:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.676 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.244 00:11:50.244 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.244 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.244 20:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.502 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.502 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.502 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:50.502 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.502 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:50.502 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.502 { 00:11:50.502 "cntlid": 81, 00:11:50.502 "qid": 0, 00:11:50.502 "state": "enabled", 00:11:50.502 "thread": "nvmf_tgt_poll_group_000", 00:11:50.502 "listen_address": { 00:11:50.502 "trtype": "TCP", 00:11:50.502 "adrfam": "IPv4", 00:11:50.502 "traddr": "10.0.0.3", 00:11:50.502 "trsvcid": "4420" 00:11:50.502 }, 00:11:50.502 "peer_address": { 00:11:50.502 "trtype": "TCP", 00:11:50.502 "adrfam": "IPv4", 00:11:50.502 "traddr": "10.0.0.1", 00:11:50.502 "trsvcid": "48126" 00:11:50.502 }, 00:11:50.502 "auth": { 00:11:50.502 "state": "completed", 00:11:50.502 "digest": "sha384", 00:11:50.502 "dhgroup": "ffdhe6144" 00:11:50.502 } 00:11:50.502 } 00:11:50.502 ]' 00:11:50.502 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.503 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:50.503 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.503 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:50.503 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.761 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:50.761 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.761 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.019 20:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:11:51.589 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.589 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:51.589 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:51.589 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.589 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:51.590 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.590 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:51.590 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.855 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.423 00:11:52.423 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.423 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.423 20:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.681 { 00:11:52.681 "cntlid": 83, 00:11:52.681 "qid": 0, 00:11:52.681 "state": "enabled", 00:11:52.681 "thread": "nvmf_tgt_poll_group_000", 00:11:52.681 "listen_address": { 00:11:52.681 "trtype": "TCP", 00:11:52.681 "adrfam": "IPv4", 00:11:52.681 "traddr": "10.0.0.3", 00:11:52.681 "trsvcid": "4420" 00:11:52.681 }, 00:11:52.681 "peer_address": { 00:11:52.681 "trtype": "TCP", 00:11:52.681 "adrfam": "IPv4", 00:11:52.681 "traddr": "10.0.0.1", 00:11:52.681 "trsvcid": "48144" 00:11:52.681 }, 00:11:52.681 "auth": { 00:11:52.681 "state": "completed", 00:11:52.681 "digest": "sha384", 00:11:52.681 "dhgroup": "ffdhe6144" 00:11:52.681 } 00:11:52.681 } 00:11:52.681 ]' 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.681 20:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.939 20:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:11:53.507 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.507 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:53.507 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:53.507 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.507 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:53.507 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.507 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:53.507 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:53.766 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:53.766 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.766 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:53.766 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:53.766 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:53.766 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.766 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.767 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:53.767 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.767 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:53.767 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.767 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.333 00:11:54.333 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.333 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.333 20:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:54.592 { 00:11:54.592 "cntlid": 85, 00:11:54.592 "qid": 0, 00:11:54.592 "state": "enabled", 00:11:54.592 "thread": "nvmf_tgt_poll_group_000", 00:11:54.592 "listen_address": { 00:11:54.592 "trtype": "TCP", 00:11:54.592 "adrfam": "IPv4", 00:11:54.592 "traddr": "10.0.0.3", 00:11:54.592 "trsvcid": "4420" 00:11:54.592 }, 00:11:54.592 "peer_address": { 00:11:54.592 "trtype": "TCP", 00:11:54.592 "adrfam": "IPv4", 00:11:54.592 "traddr": "10.0.0.1", 00:11:54.592 "trsvcid": "48180" 00:11:54.592 }, 00:11:54.592 "auth": { 00:11:54.592 "state": "completed", 00:11:54.592 "digest": "sha384", 00:11:54.592 "dhgroup": "ffdhe6144" 00:11:54.592 } 00:11:54.592 } 00:11:54.592 ]' 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.592 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.851 20:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret 
DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:11:55.418 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.418 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:55.418 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:55.418 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.418 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:55.418 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:55.418 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:55.418 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:55.677 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:56.244 00:11:56.244 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.244 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.244 20:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.244 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.244 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.244 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:56.244 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.502 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:56.503 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.503 { 00:11:56.503 "cntlid": 87, 00:11:56.503 "qid": 0, 00:11:56.503 "state": "enabled", 00:11:56.503 "thread": "nvmf_tgt_poll_group_000", 00:11:56.503 "listen_address": { 00:11:56.503 "trtype": "TCP", 00:11:56.503 "adrfam": "IPv4", 00:11:56.503 "traddr": "10.0.0.3", 00:11:56.503 "trsvcid": "4420" 00:11:56.503 }, 00:11:56.503 "peer_address": { 00:11:56.503 "trtype": "TCP", 00:11:56.503 "adrfam": "IPv4", 00:11:56.503 "traddr": "10.0.0.1", 00:11:56.503 "trsvcid": "48198" 00:11:56.503 }, 00:11:56.503 "auth": { 00:11:56.503 "state": "completed", 00:11:56.503 "digest": "sha384", 00:11:56.503 "dhgroup": "ffdhe6144" 00:11:56.503 } 00:11:56.503 } 00:11:56.503 ]' 00:11:56.503 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.503 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:56.503 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:56.503 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:56.503 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:56.503 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.503 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.503 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.761 20:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:11:57.327 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.327 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:57.327 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@557 -- # xtrace_disable 00:11:57.327 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.585 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:57.585 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.585 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:57.585 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:57.585 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:57.585 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.586 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.152 00:11:58.410 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.410 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.410 20:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.668 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.668 20:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.668 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:58.668 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.668 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:58.668 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:58.668 { 00:11:58.668 "cntlid": 89, 00:11:58.668 "qid": 0, 00:11:58.669 "state": "enabled", 00:11:58.669 "thread": "nvmf_tgt_poll_group_000", 00:11:58.669 "listen_address": { 00:11:58.669 "trtype": "TCP", 00:11:58.669 "adrfam": "IPv4", 00:11:58.669 "traddr": "10.0.0.3", 00:11:58.669 "trsvcid": "4420" 00:11:58.669 }, 00:11:58.669 "peer_address": { 00:11:58.669 "trtype": "TCP", 00:11:58.669 "adrfam": "IPv4", 00:11:58.669 "traddr": "10.0.0.1", 00:11:58.669 "trsvcid": "49554" 00:11:58.669 }, 00:11:58.669 "auth": { 00:11:58.669 "state": "completed", 00:11:58.669 "digest": "sha384", 00:11:58.669 "dhgroup": "ffdhe8192" 00:11:58.669 } 00:11:58.669 } 00:11:58.669 ]' 00:11:58.669 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:58.669 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.669 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:58.669 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:58.669 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:58.669 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.669 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.669 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.927 20:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:11:59.495 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.495 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:11:59.495 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:59.495 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.495 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:59.495 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:59.495 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:59.495 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:59.754 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:59.754 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:59.754 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:59.754 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:59.755 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:59.755 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.755 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.755 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:11:59.755 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.755 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:11:59.755 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.755 20:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.321 00:12:00.321 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.321 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.321 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:00.581 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.581 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.581 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:00.581 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.581 20:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:00.581 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:00.581 { 00:12:00.581 "cntlid": 91, 00:12:00.581 "qid": 0, 00:12:00.581 "state": "enabled", 00:12:00.581 "thread": "nvmf_tgt_poll_group_000", 00:12:00.581 "listen_address": { 00:12:00.581 "trtype": "TCP", 00:12:00.581 "adrfam": "IPv4", 00:12:00.581 "traddr": "10.0.0.3", 00:12:00.581 "trsvcid": "4420" 00:12:00.581 }, 00:12:00.581 "peer_address": { 00:12:00.581 "trtype": "TCP", 00:12:00.581 "adrfam": "IPv4", 00:12:00.581 "traddr": "10.0.0.1", 00:12:00.581 "trsvcid": "49590" 00:12:00.581 }, 00:12:00.581 "auth": { 00:12:00.581 "state": "completed", 00:12:00.581 "digest": "sha384", 00:12:00.581 "dhgroup": "ffdhe8192" 00:12:00.581 } 00:12:00.581 } 00:12:00.581 ]' 00:12:00.581 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:00.581 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.581 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.841 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:00.841 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.841 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.841 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.841 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.100 20:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:12:01.667 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.667 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:01.667 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:01.667 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.667 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:01.667 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.667 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:01.667 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.925 20:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.491 00:12:02.491 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.491 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.491 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.757 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.757 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.757 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:02.757 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.757 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:02.757 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.757 { 00:12:02.757 "cntlid": 93, 00:12:02.757 "qid": 0, 00:12:02.757 "state": "enabled", 00:12:02.757 "thread": "nvmf_tgt_poll_group_000", 00:12:02.757 "listen_address": { 00:12:02.757 "trtype": "TCP", 00:12:02.757 "adrfam": "IPv4", 
00:12:02.757 "traddr": "10.0.0.3", 00:12:02.757 "trsvcid": "4420" 00:12:02.757 }, 00:12:02.757 "peer_address": { 00:12:02.757 "trtype": "TCP", 00:12:02.757 "adrfam": "IPv4", 00:12:02.757 "traddr": "10.0.0.1", 00:12:02.757 "trsvcid": "49614" 00:12:02.757 }, 00:12:02.757 "auth": { 00:12:02.757 "state": "completed", 00:12:02.757 "digest": "sha384", 00:12:02.757 "dhgroup": "ffdhe8192" 00:12:02.757 } 00:12:02.757 } 00:12:02.757 ]' 00:12:02.757 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.027 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.027 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.027 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:03.027 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.027 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.027 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.027 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.286 20:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:12:03.854 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.854 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:03.854 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:03.854 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.854 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:03.854 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.854 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:03.854 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.113 20:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:04.113 20:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:04.681 00:12:04.681 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.681 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.681 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.940 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.940 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.940 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:04.940 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.940 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:04.940 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.940 { 00:12:04.940 "cntlid": 95, 00:12:04.940 "qid": 0, 00:12:04.940 "state": "enabled", 00:12:04.940 "thread": "nvmf_tgt_poll_group_000", 00:12:04.940 "listen_address": { 00:12:04.940 "trtype": "TCP", 00:12:04.940 "adrfam": "IPv4", 00:12:04.940 "traddr": "10.0.0.3", 00:12:04.940 "trsvcid": "4420" 00:12:04.940 }, 00:12:04.940 "peer_address": { 00:12:04.940 "trtype": "TCP", 00:12:04.940 "adrfam": "IPv4", 00:12:04.940 "traddr": "10.0.0.1", 00:12:04.940 "trsvcid": "49648" 00:12:04.940 }, 00:12:04.940 "auth": { 00:12:04.940 "state": "completed", 00:12:04.940 "digest": "sha384", 00:12:04.940 "dhgroup": "ffdhe8192" 00:12:04.940 } 00:12:04.940 } 00:12:04.940 ]' 00:12:04.940 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.940 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.940 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.198 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:05.198 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.198 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.198 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.198 20:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.457 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:12:06.025 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.025 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:06.025 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:06.025 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.025 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:06.025 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:06.025 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:06.025 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.025 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:06.025 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:06.285 20:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.285 20:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.544 00:12:06.544 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.544 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.544 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.803 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.803 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.803 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:06.803 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.062 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:07.062 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.062 { 00:12:07.062 "cntlid": 97, 00:12:07.062 "qid": 0, 00:12:07.062 "state": "enabled", 00:12:07.062 "thread": "nvmf_tgt_poll_group_000", 00:12:07.062 "listen_address": { 00:12:07.062 "trtype": "TCP", 00:12:07.062 "adrfam": "IPv4", 00:12:07.062 "traddr": "10.0.0.3", 00:12:07.062 "trsvcid": "4420" 00:12:07.062 }, 00:12:07.062 "peer_address": { 00:12:07.062 "trtype": "TCP", 00:12:07.062 "adrfam": "IPv4", 00:12:07.062 "traddr": "10.0.0.1", 00:12:07.062 "trsvcid": "44188" 00:12:07.062 }, 00:12:07.062 "auth": { 00:12:07.062 "state": "completed", 00:12:07.062 "digest": "sha512", 00:12:07.062 "dhgroup": "null" 00:12:07.062 } 00:12:07.062 } 00:12:07.062 ]' 00:12:07.062 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.062 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.062 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.062 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:07.062 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.062 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.062 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.062 20:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.321 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:12:07.888 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.888 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:07.888 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:07.888 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.888 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:07.888 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.888 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:07.888 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.147 20:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.147 20:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.405 00:12:08.405 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.405 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.405 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.972 { 00:12:08.972 "cntlid": 99, 00:12:08.972 "qid": 0, 00:12:08.972 "state": "enabled", 00:12:08.972 "thread": "nvmf_tgt_poll_group_000", 00:12:08.972 "listen_address": { 00:12:08.972 "trtype": "TCP", 00:12:08.972 "adrfam": "IPv4", 00:12:08.972 "traddr": "10.0.0.3", 00:12:08.972 "trsvcid": "4420" 00:12:08.972 }, 00:12:08.972 "peer_address": { 00:12:08.972 "trtype": "TCP", 00:12:08.972 "adrfam": "IPv4", 00:12:08.972 "traddr": "10.0.0.1", 00:12:08.972 "trsvcid": "44224" 00:12:08.972 }, 00:12:08.972 "auth": { 00:12:08.972 "state": "completed", 00:12:08.972 "digest": "sha512", 00:12:08.972 "dhgroup": "null" 00:12:08.972 } 00:12:08.972 } 00:12:08.972 ]' 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
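The records above and below trace one pass of the connect_authenticate helper: the host's bdev_nvme options are restricted to the single digest/DH group under test, the host NQN is registered on the subsystem with the DH-HMAC-CHAP key for this iteration, a controller is attached through the host RPC socket, and the resulting qpair is read back to confirm the negotiated digest, dhgroup and auth state before the controller is detached. A minimal sketch of that per-iteration sequence, assuming the socket path, addresses and NQNs shown in the trace (paths abbreviated relative to the SPDK repo; key1/ckey1 stand for whichever keyring entries the iteration selects):

    # host side: allow only the digest/DH group under test
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # target side (rpc_cmd in the trace): register the host NQN with its DH-HMAC-CHAP key(s)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # host side: attach, verify the authenticated qpair on the target, detach
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0   # expect auth.state == "completed"
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0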
00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.972 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.230 20:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:12:09.796 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.796 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:09.796 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:09.796 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.796 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:09.796 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.796 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:09.796 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.055 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.314 00:12:10.314 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.314 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.314 20:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.572 { 00:12:10.572 "cntlid": 101, 00:12:10.572 "qid": 0, 00:12:10.572 "state": "enabled", 00:12:10.572 "thread": "nvmf_tgt_poll_group_000", 00:12:10.572 "listen_address": { 00:12:10.572 "trtype": "TCP", 00:12:10.572 "adrfam": "IPv4", 00:12:10.572 "traddr": "10.0.0.3", 00:12:10.572 "trsvcid": "4420" 00:12:10.572 }, 00:12:10.572 "peer_address": { 00:12:10.572 "trtype": "TCP", 00:12:10.572 "adrfam": "IPv4", 00:12:10.572 "traddr": "10.0.0.1", 00:12:10.572 "trsvcid": "44256" 00:12:10.572 }, 00:12:10.572 "auth": { 00:12:10.572 "state": "completed", 00:12:10.572 "digest": "sha512", 00:12:10.572 "dhgroup": "null" 00:12:10.572 } 00:12:10.572 } 00:12:10.572 ]' 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:10.572 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.830 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.830 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.830 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.831 20:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:11.767 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:12.026 
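The records that follow read the controller and qpair state back through jq to confirm the negotiated parameters, as in the target/auth.sh@44 through @48 steps. A condensed sketch of those checks, assuming the host RPC socket and subsystem NQN from the trace (the expected digest and DH group change with each iteration; sha512/null is the combination being verified here):

    # host side: the attached controller should be reported as nvme0
    name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # target side: the qpair's auth block should carry the expected digest/dhgroup and be completed
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]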
00:12:12.026 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.026 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.026 20:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.284 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.284 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.284 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:12.284 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.284 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:12.284 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.284 { 00:12:12.284 "cntlid": 103, 00:12:12.284 "qid": 0, 00:12:12.284 "state": "enabled", 00:12:12.284 "thread": "nvmf_tgt_poll_group_000", 00:12:12.284 "listen_address": { 00:12:12.284 "trtype": "TCP", 00:12:12.284 "adrfam": "IPv4", 00:12:12.284 "traddr": "10.0.0.3", 00:12:12.284 "trsvcid": "4420" 00:12:12.284 }, 00:12:12.284 "peer_address": { 00:12:12.284 "trtype": "TCP", 00:12:12.284 "adrfam": "IPv4", 00:12:12.284 "traddr": "10.0.0.1", 00:12:12.284 "trsvcid": "44298" 00:12:12.284 }, 00:12:12.284 "auth": { 00:12:12.284 "state": "completed", 00:12:12.284 "digest": "sha512", 00:12:12.284 "dhgroup": "null" 00:12:12.284 } 00:12:12.284 } 00:12:12.284 ]' 00:12:12.284 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.543 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.543 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.543 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:12.543 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.543 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.543 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.543 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.809 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:12:13.394 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.394 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:13.394 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:13.394 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.394 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:13.394 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.394 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.394 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:13.394 20:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.652 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.911 00:12:13.911 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.911 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.911 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.169 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.170 { 00:12:14.170 "cntlid": 105, 00:12:14.170 "qid": 0, 00:12:14.170 "state": "enabled", 00:12:14.170 "thread": "nvmf_tgt_poll_group_000", 00:12:14.170 "listen_address": { 00:12:14.170 "trtype": "TCP", 00:12:14.170 "adrfam": "IPv4", 00:12:14.170 "traddr": "10.0.0.3", 00:12:14.170 "trsvcid": "4420" 00:12:14.170 }, 00:12:14.170 "peer_address": { 00:12:14.170 "trtype": "TCP", 00:12:14.170 "adrfam": "IPv4", 00:12:14.170 "traddr": "10.0.0.1", 00:12:14.170 "trsvcid": "44330" 00:12:14.170 }, 00:12:14.170 "auth": { 00:12:14.170 "state": "completed", 00:12:14.170 "digest": "sha512", 00:12:14.170 "dhgroup": "ffdhe2048" 00:12:14.170 } 00:12:14.170 } 00:12:14.170 ]' 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.170 20:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.427 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:12:15.360 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.360 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:15.360 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 
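After each RPC-level pass, the same key material is exercised from the kernel host with nvme-cli, as the nvme connect/disconnect records above show, and the host entry is removed before the next combination. A minimal sketch of that leg, with the DHHC-1 secrets abbreviated to placeholders (the real run passes the full base64 blobs printed in the trace):

    # kernel host: connect with the host secret, plus the controller secret for bidirectional auth
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 \
        --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 \
        --dhchap-secret "DHHC-1:00:<host secret>" \
        --dhchap-ctrl-secret "DHHC-1:03:<controller secret>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # target side: drop the host entry before the next digest/dhgroup combination
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9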
00:12:15.360 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.360 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:15.360 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.360 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:15.360 20:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.360 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.619 00:12:15.619 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.619 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.619 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.185 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.185 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.185 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@557 -- # xtrace_disable 00:12:16.185 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.185 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:16.185 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.185 { 00:12:16.185 "cntlid": 107, 00:12:16.185 "qid": 0, 00:12:16.185 "state": "enabled", 00:12:16.185 "thread": "nvmf_tgt_poll_group_000", 00:12:16.185 "listen_address": { 00:12:16.185 "trtype": "TCP", 00:12:16.185 "adrfam": "IPv4", 00:12:16.185 "traddr": "10.0.0.3", 00:12:16.185 "trsvcid": "4420" 00:12:16.185 }, 00:12:16.185 "peer_address": { 00:12:16.185 "trtype": "TCP", 00:12:16.185 "adrfam": "IPv4", 00:12:16.185 "traddr": "10.0.0.1", 00:12:16.185 "trsvcid": "44340" 00:12:16.185 }, 00:12:16.185 "auth": { 00:12:16.185 "state": "completed", 00:12:16.185 "digest": "sha512", 00:12:16.185 "dhgroup": "ffdhe2048" 00:12:16.185 } 00:12:16.185 } 00:12:16.185 ]' 00:12:16.186 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.186 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.186 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.186 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:16.186 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.186 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.186 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.186 20:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.444 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:12:17.012 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.012 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:17.012 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:17.012 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.012 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:17.012 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.012 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:17.012 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.271 20:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.530 00:12:17.530 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.530 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.530 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.788 { 00:12:17.788 "cntlid": 109, 00:12:17.788 "qid": 
0, 00:12:17.788 "state": "enabled", 00:12:17.788 "thread": "nvmf_tgt_poll_group_000", 00:12:17.788 "listen_address": { 00:12:17.788 "trtype": "TCP", 00:12:17.788 "adrfam": "IPv4", 00:12:17.788 "traddr": "10.0.0.3", 00:12:17.788 "trsvcid": "4420" 00:12:17.788 }, 00:12:17.788 "peer_address": { 00:12:17.788 "trtype": "TCP", 00:12:17.788 "adrfam": "IPv4", 00:12:17.788 "traddr": "10.0.0.1", 00:12:17.788 "trsvcid": "35476" 00:12:17.788 }, 00:12:17.788 "auth": { 00:12:17.788 "state": "completed", 00:12:17.788 "digest": "sha512", 00:12:17.788 "dhgroup": "ffdhe2048" 00:12:17.788 } 00:12:17.788 } 00:12:17.788 ]' 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:17.788 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.047 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.047 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.047 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.306 20:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:12:18.875 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.875 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:18.875 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:18.875 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.875 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:18.875 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.875 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:18.875 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.134 20:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:19.393 00:12:19.393 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.393 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.393 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.652 { 00:12:19.652 "cntlid": 111, 00:12:19.652 "qid": 0, 00:12:19.652 "state": "enabled", 00:12:19.652 "thread": "nvmf_tgt_poll_group_000", 00:12:19.652 "listen_address": { 00:12:19.652 "trtype": "TCP", 00:12:19.652 "adrfam": "IPv4", 00:12:19.652 "traddr": "10.0.0.3", 00:12:19.652 "trsvcid": "4420" 00:12:19.652 }, 00:12:19.652 "peer_address": { 00:12:19.652 "trtype": "TCP", 00:12:19.652 "adrfam": "IPv4", 00:12:19.652 "traddr": "10.0.0.1", 00:12:19.652 "trsvcid": "35498" 00:12:19.652 }, 00:12:19.652 "auth": { 00:12:19.652 "state": "completed", 00:12:19.652 
"digest": "sha512", 00:12:19.652 "dhgroup": "ffdhe2048" 00:12:19.652 } 00:12:19.652 } 00:12:19.652 ]' 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:19.652 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.911 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.911 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.911 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.169 20:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.737 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.304 00:12:21.304 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.304 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.304 20:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.563 { 00:12:21.563 "cntlid": 113, 00:12:21.563 "qid": 0, 00:12:21.563 "state": "enabled", 00:12:21.563 "thread": "nvmf_tgt_poll_group_000", 00:12:21.563 "listen_address": { 00:12:21.563 "trtype": "TCP", 00:12:21.563 "adrfam": "IPv4", 00:12:21.563 "traddr": "10.0.0.3", 00:12:21.563 "trsvcid": "4420" 00:12:21.563 }, 00:12:21.563 "peer_address": { 00:12:21.563 "trtype": "TCP", 00:12:21.563 "adrfam": "IPv4", 00:12:21.563 "traddr": "10.0.0.1", 00:12:21.563 "trsvcid": "35516" 00:12:21.563 }, 00:12:21.563 "auth": { 00:12:21.563 "state": "completed", 00:12:21.563 "digest": "sha512", 00:12:21.563 "dhgroup": "ffdhe3072" 00:12:21.563 } 00:12:21.563 } 00:12:21.563 ]' 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.563 20:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.563 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.822 20:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:12:22.413 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.413 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:22.413 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:22.413 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.413 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:22.413 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.413 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:22.413 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.672 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.242 00:12:23.242 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.242 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.242 20:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.242 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.242 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.242 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:23.242 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.501 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:23.501 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.501 { 00:12:23.501 "cntlid": 115, 00:12:23.501 "qid": 0, 00:12:23.501 "state": "enabled", 00:12:23.501 "thread": "nvmf_tgt_poll_group_000", 00:12:23.501 "listen_address": { 00:12:23.501 "trtype": "TCP", 00:12:23.501 "adrfam": "IPv4", 00:12:23.501 "traddr": "10.0.0.3", 00:12:23.501 "trsvcid": "4420" 00:12:23.501 }, 00:12:23.501 "peer_address": { 00:12:23.501 "trtype": "TCP", 00:12:23.501 "adrfam": "IPv4", 00:12:23.501 "traddr": "10.0.0.1", 00:12:23.501 "trsvcid": "35542" 00:12:23.501 }, 00:12:23.501 "auth": { 00:12:23.501 "state": "completed", 00:12:23.501 "digest": "sha512", 00:12:23.501 "dhgroup": "ffdhe3072" 00:12:23.501 } 00:12:23.501 } 00:12:23.501 ]' 00:12:23.501 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.501 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.501 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.501 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:23.501 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.501 20:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.501 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.501 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.759 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:12:24.326 20:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.326 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:24.326 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:24.326 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.326 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:24.326 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.326 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:24.326 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:24.585 20:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.585 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.844 00:12:24.844 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.844 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.844 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.103 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.103 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.103 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:25.103 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.103 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:25.103 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.103 { 00:12:25.103 "cntlid": 117, 00:12:25.103 "qid": 0, 00:12:25.103 "state": "enabled", 00:12:25.103 "thread": "nvmf_tgt_poll_group_000", 00:12:25.103 "listen_address": { 00:12:25.103 "trtype": "TCP", 00:12:25.103 "adrfam": "IPv4", 00:12:25.103 "traddr": "10.0.0.3", 00:12:25.103 "trsvcid": "4420" 00:12:25.103 }, 00:12:25.103 "peer_address": { 00:12:25.103 "trtype": "TCP", 00:12:25.103 "adrfam": "IPv4", 00:12:25.103 "traddr": "10.0.0.1", 00:12:25.103 "trsvcid": "35568" 00:12:25.103 }, 00:12:25.103 "auth": { 00:12:25.103 "state": "completed", 00:12:25.103 "digest": "sha512", 00:12:25.103 "dhgroup": "ffdhe3072" 00:12:25.103 } 00:12:25.103 } 00:12:25.103 ]' 00:12:25.103 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.361 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.362 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.362 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:25.362 20:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.362 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.362 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.362 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
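Each key/dhgroup pass recorded above follows the same shape: the host-side bdev_nvme module is restricted to the digest and DH group under test, the host NQN is added to the subsystem with its DH-HMAC-CHAP key pair, a controller is attached over TCP, the resulting qpair is checked with jq for the expected digest, dhgroup and "completed" auth state, and the controller is detached again. A condensed sketch of one such pass, using only commands and values that appear in this trace (rpc.py abbreviates the scripts/rpc.py path shown above; rpc_cmd is the autotest helper that talks to the target's RPC socket, while the host-side calls go through /var/tmp/host.sock):

  # host side: only allow the digest/dhgroup being exercised
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # target side: register the host on the subsystem with its key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: attach and authenticate, then verify the negotiated parameters
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0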
00:12:25.620 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:12:26.188 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.188 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:26.188 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:26.188 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.188 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:26.188 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.188 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:26.188 20:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:26.446 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:26.446 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.446 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:26.446 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:26.446 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:26.446 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.446 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:12:26.446 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:26.446 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.705 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:26.705 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.705 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.963 00:12:26.963 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.963 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.963 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.223 { 00:12:27.223 "cntlid": 119, 00:12:27.223 "qid": 0, 00:12:27.223 "state": "enabled", 00:12:27.223 "thread": "nvmf_tgt_poll_group_000", 00:12:27.223 "listen_address": { 00:12:27.223 "trtype": "TCP", 00:12:27.223 "adrfam": "IPv4", 00:12:27.223 "traddr": "10.0.0.3", 00:12:27.223 "trsvcid": "4420" 00:12:27.223 }, 00:12:27.223 "peer_address": { 00:12:27.223 "trtype": "TCP", 00:12:27.223 "adrfam": "IPv4", 00:12:27.223 "traddr": "10.0.0.1", 00:12:27.223 "trsvcid": "54156" 00:12:27.223 }, 00:12:27.223 "auth": { 00:12:27.223 "state": "completed", 00:12:27.223 "digest": "sha512", 00:12:27.223 "dhgroup": "ffdhe3072" 00:12:27.223 } 00:12:27.223 } 00:12:27.223 ]' 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:27.223 20:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.482 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.482 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.482 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.482 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:12:28.419 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:28.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.419 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:28.419 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:28.419 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.419 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:28.419 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:28.419 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.419 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:28.419 20:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:28.419 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:28.419 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.419 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:28.419 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:28.419 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:28.419 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.419 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.419 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:28.420 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.420 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:28.420 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.420 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.987 00:12:28.987 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.987 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.987 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.246 { 00:12:29.246 "cntlid": 121, 00:12:29.246 "qid": 0, 00:12:29.246 "state": "enabled", 00:12:29.246 "thread": "nvmf_tgt_poll_group_000", 00:12:29.246 "listen_address": { 00:12:29.246 "trtype": "TCP", 00:12:29.246 "adrfam": "IPv4", 00:12:29.246 "traddr": "10.0.0.3", 00:12:29.246 "trsvcid": "4420" 00:12:29.246 }, 00:12:29.246 "peer_address": { 00:12:29.246 "trtype": "TCP", 00:12:29.246 "adrfam": "IPv4", 00:12:29.246 "traddr": "10.0.0.1", 00:12:29.246 "trsvcid": "54182" 00:12:29.246 }, 00:12:29.246 "auth": { 00:12:29.246 "state": "completed", 00:12:29.246 "digest": "sha512", 00:12:29.246 "dhgroup": "ffdhe4096" 00:12:29.246 } 00:12:29.246 } 00:12:29.246 ]' 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.246 20:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.504 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:12:30.071 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.071 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:30.071 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:30.071 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.071 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:30.071 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.071 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:30.071 20:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:30.330 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:30.330 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.330 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:30.330 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:30.330 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:30.330 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.330 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.330 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:30.330 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.588 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:30.588 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.588 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.847 00:12:30.847 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.847 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:30.847 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.105 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.105 20:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.105 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:31.105 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.105 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:31.105 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.105 { 00:12:31.105 "cntlid": 123, 00:12:31.105 "qid": 0, 00:12:31.105 "state": "enabled", 00:12:31.105 "thread": "nvmf_tgt_poll_group_000", 00:12:31.105 "listen_address": { 00:12:31.105 "trtype": "TCP", 00:12:31.105 "adrfam": "IPv4", 00:12:31.105 "traddr": "10.0.0.3", 00:12:31.105 "trsvcid": "4420" 00:12:31.105 }, 00:12:31.105 "peer_address": { 00:12:31.105 "trtype": "TCP", 00:12:31.105 "adrfam": "IPv4", 00:12:31.106 "traddr": "10.0.0.1", 00:12:31.106 "trsvcid": "54210" 00:12:31.106 }, 00:12:31.106 "auth": { 00:12:31.106 "state": "completed", 00:12:31.106 "digest": "sha512", 00:12:31.106 "dhgroup": "ffdhe4096" 00:12:31.106 } 00:12:31.106 } 00:12:31.106 ]' 00:12:31.106 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.106 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.106 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.106 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:31.106 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.106 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.106 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.106 20:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.364 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:12:31.966 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.966 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:31.966 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:31.966 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.966 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 
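Besides the SPDK host-side controller, every pass also exercises the kernel initiator: nvme-cli connects to the same subsystem with the key material passed as DHHC-1 secrets on the command line, the trace records the "disconnected 1 controller(s)" teardown, and the host entry is removed from the subsystem before the next key is configured. Roughly, with the secrets elided here (the full DHHC-1 strings are printed in the trace):

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 \
      --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 \
      --dhchap-secret DHHC-1:01:... --dhchap-ctrl-secret DHHC-1:02:...
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9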
00:12:31.966 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:31.966 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:31.966 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.224 20:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.790 00:12:32.790 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.790 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.790 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.048 { 00:12:33.048 "cntlid": 125, 00:12:33.048 "qid": 0, 00:12:33.048 "state": "enabled", 00:12:33.048 "thread": "nvmf_tgt_poll_group_000", 00:12:33.048 "listen_address": { 00:12:33.048 "trtype": "TCP", 00:12:33.048 "adrfam": "IPv4", 00:12:33.048 "traddr": "10.0.0.3", 00:12:33.048 "trsvcid": "4420" 00:12:33.048 }, 00:12:33.048 "peer_address": { 00:12:33.048 "trtype": "TCP", 00:12:33.048 "adrfam": "IPv4", 00:12:33.048 "traddr": "10.0.0.1", 00:12:33.048 "trsvcid": "54240" 00:12:33.048 }, 00:12:33.048 "auth": { 00:12:33.048 "state": "completed", 00:12:33.048 "digest": "sha512", 00:12:33.048 "dhgroup": "ffdhe4096" 00:12:33.048 } 00:12:33.048 } 00:12:33.048 ]' 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.048 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.307 20:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:12:33.874 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.874 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:33.874 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:33.874 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.874 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:33.874 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:33.874 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:33.874 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.133 20:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.701 00:12:34.701 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.701 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:34.701 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:34.960 { 00:12:34.960 "cntlid": 127, 00:12:34.960 "qid": 0, 00:12:34.960 "state": "enabled", 00:12:34.960 "thread": "nvmf_tgt_poll_group_000", 00:12:34.960 "listen_address": { 00:12:34.960 "trtype": "TCP", 00:12:34.960 "adrfam": "IPv4", 00:12:34.960 "traddr": "10.0.0.3", 00:12:34.960 "trsvcid": "4420" 00:12:34.960 }, 00:12:34.960 "peer_address": { 
00:12:34.960 "trtype": "TCP", 00:12:34.960 "adrfam": "IPv4", 00:12:34.960 "traddr": "10.0.0.1", 00:12:34.960 "trsvcid": "54266" 00:12:34.960 }, 00:12:34.960 "auth": { 00:12:34.960 "state": "completed", 00:12:34.960 "digest": "sha512", 00:12:34.960 "dhgroup": "ffdhe4096" 00:12:34.960 } 00:12:34.960 } 00:12:34.960 ]' 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.960 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.219 20:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:12:35.786 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.786 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:35.786 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:35.786 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.786 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:35.786 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:35.786 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:35.786 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:35.786 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:36.045 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:36.045 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.045 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:12:36.045 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:36.045 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:36.045 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.045 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.045 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:36.045 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.304 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:36.304 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.304 20:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.562 00:12:36.562 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.562 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.562 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.820 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.820 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.820 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:36.820 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.820 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:36.820 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:36.820 { 00:12:36.820 "cntlid": 129, 00:12:36.820 "qid": 0, 00:12:36.820 "state": "enabled", 00:12:36.820 "thread": "nvmf_tgt_poll_group_000", 00:12:36.820 "listen_address": { 00:12:36.820 "trtype": "TCP", 00:12:36.820 "adrfam": "IPv4", 00:12:36.820 "traddr": "10.0.0.3", 00:12:36.820 "trsvcid": "4420" 00:12:36.820 }, 00:12:36.820 "peer_address": { 00:12:36.820 "trtype": "TCP", 00:12:36.820 "adrfam": "IPv4", 00:12:36.820 "traddr": "10.0.0.1", 00:12:36.820 "trsvcid": "54292" 00:12:36.820 }, 00:12:36.820 "auth": { 00:12:36.820 "state": "completed", 00:12:36.820 "digest": "sha512", 00:12:36.820 "dhgroup": "ffdhe6144" 00:12:36.820 } 00:12:36.820 } 00:12:36.820 ]' 00:12:36.820 20:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:36.820 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.821 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.078 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:37.078 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.078 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.078 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.078 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.337 20:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:12:37.903 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.903 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:37.903 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:37.903 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.903 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:37.903 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:37.903 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:37.903 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.161 20:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.420 00:12:38.420 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:38.420 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:38.420 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.678 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.678 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.678 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:38.678 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.937 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:38.937 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:38.937 { 00:12:38.937 "cntlid": 131, 00:12:38.937 "qid": 0, 00:12:38.937 "state": "enabled", 00:12:38.937 "thread": "nvmf_tgt_poll_group_000", 00:12:38.937 "listen_address": { 00:12:38.937 "trtype": "TCP", 00:12:38.937 "adrfam": "IPv4", 00:12:38.937 "traddr": "10.0.0.3", 00:12:38.937 "trsvcid": "4420" 00:12:38.937 }, 00:12:38.937 "peer_address": { 00:12:38.937 "trtype": "TCP", 00:12:38.937 "adrfam": "IPv4", 00:12:38.937 "traddr": "10.0.0.1", 00:12:38.937 "trsvcid": "39702" 00:12:38.937 }, 00:12:38.937 "auth": { 00:12:38.937 "state": "completed", 00:12:38.937 "digest": "sha512", 00:12:38.937 "dhgroup": "ffdhe6144" 00:12:38.937 } 00:12:38.937 } 00:12:38.937 ]' 00:12:38.937 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:38.937 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.937 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:38.937 20:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:38.937 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:38.937 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.937 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.937 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.196 20:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:12:39.763 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.763 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:39.763 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:39.763 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.763 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:39.763 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:39.763 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:39.763 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 
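The passes are driven by two loops in target/auth.sh: an outer loop over the DH groups (this excerpt covers ffdhe3072, ffdhe4096 and ffdhe6144 for the sha512 digest) and an inner loop over the four configured keys. key3 is configured without a controller key in this run, which is why its passes carry only --dhchap-key key3 / --dhchap-secret and no ctrl counterpart; the ${ckeys[$3]:+...} expansion visible above is what drops those arguments. Sketched from the trace, loop bounds illustrative:

  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do      # groups seen in this part of the trace
      for keyid in "${!keys[@]}"; do                    # keys 0-3; key3 has no ctrlr key
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"   # helper invoked at target/auth.sh@96 above
      done
  done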
00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.022 20:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.590 00:12:40.590 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:40.590 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:40.590 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.851 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:40.852 { 00:12:40.852 "cntlid": 133, 00:12:40.852 "qid": 0, 00:12:40.852 "state": "enabled", 00:12:40.852 "thread": "nvmf_tgt_poll_group_000", 00:12:40.852 "listen_address": { 00:12:40.852 "trtype": "TCP", 00:12:40.852 "adrfam": "IPv4", 00:12:40.852 "traddr": "10.0.0.3", 00:12:40.852 "trsvcid": "4420" 00:12:40.852 }, 00:12:40.852 "peer_address": { 00:12:40.852 "trtype": "TCP", 00:12:40.852 "adrfam": "IPv4", 00:12:40.852 "traddr": "10.0.0.1", 00:12:40.852 "trsvcid": "39748" 00:12:40.852 }, 00:12:40.852 "auth": { 00:12:40.852 "state": "completed", 00:12:40.852 "digest": "sha512", 00:12:40.852 "dhgroup": "ffdhe6144" 00:12:40.852 } 00:12:40.852 } 00:12:40.852 ]' 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.852 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.133 20:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:12:41.718 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.718 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:41.718 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:41.718 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.718 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:41.718 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.718 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:41.718 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:41.976 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:41.976 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.976 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.976 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:41.976 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:41.976 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.976 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:12:41.977 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:41.977 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.977 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:41.977 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:41.977 20:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.543 00:12:42.543 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:42.543 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:42.543 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.801 { 00:12:42.801 "cntlid": 135, 00:12:42.801 "qid": 0, 00:12:42.801 "state": "enabled", 00:12:42.801 "thread": "nvmf_tgt_poll_group_000", 00:12:42.801 "listen_address": { 00:12:42.801 "trtype": "TCP", 00:12:42.801 "adrfam": "IPv4", 00:12:42.801 "traddr": "10.0.0.3", 00:12:42.801 "trsvcid": "4420" 00:12:42.801 }, 00:12:42.801 "peer_address": { 00:12:42.801 "trtype": "TCP", 00:12:42.801 "adrfam": "IPv4", 00:12:42.801 "traddr": "10.0.0.1", 00:12:42.801 "trsvcid": "39776" 00:12:42.801 }, 00:12:42.801 "auth": { 00:12:42.801 "state": "completed", 00:12:42.801 "digest": "sha512", 00:12:42.801 "dhgroup": "ffdhe6144" 00:12:42.801 } 00:12:42.801 } 00:12:42.801 ]' 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.801 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.059 20:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:12:43.624 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.624 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:43.624 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:43.624 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.624 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:43.624 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.624 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:43.624 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:43.624 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:43.882 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:43.882 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.882 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:43.882 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:43.882 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:43.882 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.882 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.882 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:43.882 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.139 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:44.139 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.140 20:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:44.706 00:12:44.706 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:44.706 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:44.706 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.964 { 00:12:44.964 "cntlid": 137, 00:12:44.964 "qid": 0, 00:12:44.964 "state": "enabled", 00:12:44.964 "thread": "nvmf_tgt_poll_group_000", 00:12:44.964 "listen_address": { 00:12:44.964 "trtype": "TCP", 00:12:44.964 "adrfam": "IPv4", 00:12:44.964 "traddr": "10.0.0.3", 00:12:44.964 "trsvcid": "4420" 00:12:44.964 }, 00:12:44.964 "peer_address": { 00:12:44.964 "trtype": "TCP", 00:12:44.964 "adrfam": "IPv4", 00:12:44.964 "traddr": "10.0.0.1", 00:12:44.964 "trsvcid": "39796" 00:12:44.964 }, 00:12:44.964 "auth": { 00:12:44.964 "state": "completed", 00:12:44.964 "digest": "sha512", 00:12:44.964 "dhgroup": "ffdhe8192" 00:12:44.964 } 00:12:44.964 } 00:12:44.964 ]' 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.964 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.222 20:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:12:45.788 20:53:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.047 20:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.614 00:12:46.614 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:46.614 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
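Besides the RPC-driven attach, each round also exercises the kernel initiator: after the qpair check the suite runs nvme connect against the same listener with the cleartext DH-HMAC-CHAP secrets (the DHHC-1:xx:... strings in the log) and then nvme disconnect. A sketch of that host-side step with the secrets replaced by placeholders; the placeholders and the $hostid/$hostnqn variables are the only things not taken verbatim from the log:

  hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid

  # kernel initiator: authenticate with the generated secrets, then tear the session down
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:01:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0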
00:12:46.614 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.873 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.873 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.873 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:46.873 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.873 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:46.873 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.873 { 00:12:46.873 "cntlid": 139, 00:12:46.873 "qid": 0, 00:12:46.873 "state": "enabled", 00:12:46.873 "thread": "nvmf_tgt_poll_group_000", 00:12:46.873 "listen_address": { 00:12:46.873 "trtype": "TCP", 00:12:46.873 "adrfam": "IPv4", 00:12:46.873 "traddr": "10.0.0.3", 00:12:46.873 "trsvcid": "4420" 00:12:46.873 }, 00:12:46.873 "peer_address": { 00:12:46.873 "trtype": "TCP", 00:12:46.873 "adrfam": "IPv4", 00:12:46.873 "traddr": "10.0.0.1", 00:12:46.873 "trsvcid": "39810" 00:12:46.873 }, 00:12:46.873 "auth": { 00:12:46.873 "state": "completed", 00:12:46.873 "digest": "sha512", 00:12:46.873 "dhgroup": "ffdhe8192" 00:12:46.873 } 00:12:46.873 } 00:12:46.873 ]' 00:12:46.873 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.873 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.873 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.131 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:47.131 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.132 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.132 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.132 20:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.390 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:01:MzBhNGExYWI4YTA2ZjIxZTA3OTM0M2RhYmQxODg3ODlExEPL: --dhchap-ctrl-secret DHHC-1:02:OTFkZTBlZDNkYjNkYTMxYWRiYjgzOWM4Y2QzZDVlODMwNWUwZWZmYmI4NDg0NjBlpBkJuQ==: 00:12:47.957 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.957 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:47.957 20:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:47.957 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.957 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:47.957 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.957 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:47.957 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.216 20:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.783 00:12:48.784 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.784 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.784 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.043 { 00:12:49.043 "cntlid": 141, 00:12:49.043 "qid": 0, 00:12:49.043 "state": "enabled", 00:12:49.043 "thread": "nvmf_tgt_poll_group_000", 00:12:49.043 "listen_address": { 00:12:49.043 "trtype": "TCP", 00:12:49.043 "adrfam": "IPv4", 00:12:49.043 "traddr": "10.0.0.3", 00:12:49.043 "trsvcid": "4420" 00:12:49.043 }, 00:12:49.043 "peer_address": { 00:12:49.043 "trtype": "TCP", 00:12:49.043 "adrfam": "IPv4", 00:12:49.043 "traddr": "10.0.0.1", 00:12:49.043 "trsvcid": "35378" 00:12:49.043 }, 00:12:49.043 "auth": { 00:12:49.043 "state": "completed", 00:12:49.043 "digest": "sha512", 00:12:49.043 "dhgroup": "ffdhe8192" 00:12:49.043 } 00:12:49.043 } 00:12:49.043 ]' 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.043 20:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.302 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:02:NzAzYzBmN2FmOGE2NjMwZDMzYTQzNjA1MTdmYzZmYzFhY2UxZjk0ZDAzYWUxMzU3oBMzow==: --dhchap-ctrl-secret DHHC-1:01:MDRiODllNjQ0OWZkODU4NDc5NGYxMzMyNDAxZDI0MTZh9n20: 00:12:49.869 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.869 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:49.869 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:49.869 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.869 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:49.869 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:49.869 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:49.869 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.127 20:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:50.733 00:12:50.733 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.733 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.733 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.992 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.992 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.992 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:50.992 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.992 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:50.992 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.992 { 00:12:50.992 "cntlid": 
143, 00:12:50.992 "qid": 0, 00:12:50.992 "state": "enabled", 00:12:50.992 "thread": "nvmf_tgt_poll_group_000", 00:12:50.992 "listen_address": { 00:12:50.992 "trtype": "TCP", 00:12:50.992 "adrfam": "IPv4", 00:12:50.992 "traddr": "10.0.0.3", 00:12:50.992 "trsvcid": "4420" 00:12:50.992 }, 00:12:50.992 "peer_address": { 00:12:50.992 "trtype": "TCP", 00:12:50.992 "adrfam": "IPv4", 00:12:50.992 "traddr": "10.0.0.1", 00:12:50.992 "trsvcid": "35398" 00:12:50.992 }, 00:12:50.992 "auth": { 00:12:50.992 "state": "completed", 00:12:50.992 "digest": "sha512", 00:12:50.992 "dhgroup": "ffdhe8192" 00:12:50.992 } 00:12:50.992 } 00:12:50.992 ]' 00:12:50.992 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.250 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.250 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.250 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:51.250 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.250 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.250 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.250 20:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.509 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:12:52.076 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.076 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:52.076 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:52.076 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.076 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:52.076 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:52.076 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:52.076 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:52.076 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:52.335 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:52.335 20:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.335 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.902 00:12:52.902 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.902 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.902 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.470 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.470 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.470 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:53.470 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.470 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:53.470 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.470 { 00:12:53.470 
"cntlid": 145, 00:12:53.470 "qid": 0, 00:12:53.470 "state": "enabled", 00:12:53.470 "thread": "nvmf_tgt_poll_group_000", 00:12:53.470 "listen_address": { 00:12:53.470 "trtype": "TCP", 00:12:53.470 "adrfam": "IPv4", 00:12:53.470 "traddr": "10.0.0.3", 00:12:53.470 "trsvcid": "4420" 00:12:53.470 }, 00:12:53.470 "peer_address": { 00:12:53.470 "trtype": "TCP", 00:12:53.470 "adrfam": "IPv4", 00:12:53.470 "traddr": "10.0.0.1", 00:12:53.470 "trsvcid": "35430" 00:12:53.470 }, 00:12:53.470 "auth": { 00:12:53.470 "state": "completed", 00:12:53.470 "digest": "sha512", 00:12:53.470 "dhgroup": "ffdhe8192" 00:12:53.470 } 00:12:53.470 } 00:12:53.470 ]' 00:12:53.470 20:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.470 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.470 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.470 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:53.470 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.470 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.470 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.470 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.729 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:00:NTBiOTNiMzJlOWZhYzQ4YWVkMWIxZDM0MTkxYTg1NmY0NzU2MDU5OWNmMDYwOWMyTQYHHQ==: --dhchap-ctrl-secret DHHC-1:03:NDJlZjE0MWYyZTc2MDY3MzQwYzI0MmQ0YTRhNzRmMWYxMWUwY2YwODNmNzM2NzczNTQ0OGE0OTU3YmZlMmU1YmL9Glg=: 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- 
# [[ 0 == 0 ]] 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@646 -- # local es=0 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@634 -- # local arg=hostrpc 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # type -t hostrpc 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:54.296 20:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:54.864 request: 00:12:54.864 { 00:12:54.864 "name": "nvme0", 00:12:54.864 "trtype": "tcp", 00:12:54.864 "traddr": "10.0.0.3", 00:12:54.864 "adrfam": "ipv4", 00:12:54.864 "trsvcid": "4420", 00:12:54.864 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:54.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9", 00:12:54.864 "prchk_reftag": false, 00:12:54.864 "prchk_guard": false, 00:12:54.864 "hdgst": false, 00:12:54.864 "ddgst": false, 00:12:54.864 "dhchap_key": "key2", 00:12:54.864 "method": "bdev_nvme_attach_controller", 00:12:54.864 "req_id": 1 00:12:54.864 } 00:12:54.864 Got JSON-RPC error response 00:12:54.864 response: 00:12:54.864 { 00:12:54.864 "code": -5, 00:12:54.864 "message": "Input/output error" 00:12:54.864 } 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # es=1 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@646 -- # local es=0 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@634 -- # local arg=hostrpc 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # type -t hostrpc 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:54.864 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:55.124 request: 00:12:55.124 { 00:12:55.124 "name": "nvme0", 00:12:55.124 "trtype": "tcp", 00:12:55.124 "traddr": "10.0.0.3", 00:12:55.124 "adrfam": "ipv4", 00:12:55.124 "trsvcid": "4420", 00:12:55.124 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:55.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9", 00:12:55.124 "prchk_reftag": false, 00:12:55.124 "prchk_guard": false, 00:12:55.124 "hdgst": false, 00:12:55.124 "ddgst": false, 00:12:55.124 "dhchap_key": "key1", 00:12:55.124 "dhchap_ctrlr_key": "ckey2", 00:12:55.124 "method": "bdev_nvme_attach_controller", 00:12:55.124 "req_id": 1 00:12:55.124 } 00:12:55.124 Got JSON-RPC error response 00:12:55.124 response: 00:12:55.124 { 00:12:55.124 "code": -5, 00:12:55.124 "message": "Input/output error" 
00:12:55.124 } 00:12:55.382 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # es=1 00:12:55.382 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:12:55.382 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:12:55.382 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:12:55.382 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:55.382 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key1 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@646 -- # local es=0 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@634 -- # local arg=hostrpc 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # type -t hostrpc 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.383 20:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.951 request: 00:12:55.951 { 00:12:55.951 "name": "nvme0", 00:12:55.951 "trtype": "tcp", 00:12:55.951 "traddr": "10.0.0.3", 00:12:55.951 "adrfam": "ipv4", 00:12:55.951 "trsvcid": "4420", 00:12:55.951 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:55.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9", 00:12:55.951 "prchk_reftag": false, 00:12:55.951 "prchk_guard": false, 00:12:55.951 "hdgst": false, 00:12:55.951 "ddgst": false, 00:12:55.951 "dhchap_key": "key1", 00:12:55.951 "dhchap_ctrlr_key": "ckey1", 00:12:55.951 "method": "bdev_nvme_attach_controller", 00:12:55.951 "req_id": 1 00:12:55.951 } 00:12:55.951 Got JSON-RPC error response 00:12:55.951 response: 00:12:55.951 { 00:12:55.951 "code": -5, 00:12:55.951 "message": "Input/output error" 00:12:55.951 } 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # es=1 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77743 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 77743 ']' 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 77743 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77743 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:55.951 killing process with pid 77743 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77743' 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 77743 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 77743 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@499 -- # timing_enter 
start_nvmf_tgt 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@501 -- # nvmfpid=80669 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # waitforlisten 80669 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 80669 ']' 00:12:55.951 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.952 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:55.952 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.952 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:55.952 20:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:57.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 80669 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 80669 ']' 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:57.329 20:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.329 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.329 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:12:57.329 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:12:57.329 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:57.329 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:57.588 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:58.155 00:12:58.155 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.155 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.155 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.414 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.414 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:12:58.414 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:58.414 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.414 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:58.414 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.414 { 00:12:58.414 "cntlid": 1, 00:12:58.414 "qid": 0, 00:12:58.414 "state": "enabled", 00:12:58.414 "thread": "nvmf_tgt_poll_group_000", 00:12:58.414 "listen_address": { 00:12:58.414 "trtype": "TCP", 00:12:58.414 "adrfam": "IPv4", 00:12:58.414 "traddr": "10.0.0.3", 00:12:58.414 "trsvcid": "4420" 00:12:58.414 }, 00:12:58.414 "peer_address": { 00:12:58.414 "trtype": "TCP", 00:12:58.414 "adrfam": "IPv4", 00:12:58.414 "traddr": "10.0.0.1", 00:12:58.414 "trsvcid": "52200" 00:12:58.414 }, 00:12:58.414 "auth": { 00:12:58.414 "state": "completed", 00:12:58.414 "digest": "sha512", 00:12:58.414 "dhgroup": "ffdhe8192" 00:12:58.414 } 00:12:58.414 } 00:12:58.414 ]' 00:12:58.414 20:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.414 20:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.414 20:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.414 20:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:58.414 20:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.414 20:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.414 20:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.414 20:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.672 20:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid 78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-secret DHHC-1:03:MzQ1ZTJkMDhjN2VjNmNjMDllNjE5MzUxMWQzZmY5MWViMTViYzlmMzQxOWVkZjA0MzAyN2E2NDczYmUxNGU2NeWknHw=: 00:12:59.238 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.497 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:12:59.497 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:59.497 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.497 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:59.497 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --dhchap-key key3 00:12:59.497 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:12:59.497 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.497 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:12:59.497 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:59.497 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:59.755 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.755 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@646 -- # local es=0 00:12:59.755 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.755 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@634 -- # local arg=hostrpc 00:12:59.755 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:59.755 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # type -t hostrpc 00:12:59.755 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:12:59.755 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:59.755 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.014 request: 00:13:00.014 { 00:13:00.014 "name": "nvme0", 00:13:00.014 "trtype": "tcp", 00:13:00.014 "traddr": "10.0.0.3", 00:13:00.014 "adrfam": "ipv4", 00:13:00.014 "trsvcid": "4420", 00:13:00.014 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:00.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9", 00:13:00.014 "prchk_reftag": false, 00:13:00.014 "prchk_guard": false, 00:13:00.014 "hdgst": false, 00:13:00.014 "ddgst": false, 00:13:00.014 "dhchap_key": "key3", 00:13:00.014 "method": "bdev_nvme_attach_controller", 00:13:00.014 "req_id": 1 00:13:00.014 } 00:13:00.014 Got JSON-RPC error response 00:13:00.014 response: 00:13:00.014 { 00:13:00.014 "code": -5, 00:13:00.014 "message": "Input/output error" 00:13:00.014 } 00:13:00.014 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # es=1 
00:13:00.014 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:13:00.014 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:13:00.014 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:13:00.014 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:00.014 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:00.014 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:00.014 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:00.275 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.275 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@646 -- # local es=0 00:13:00.275 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.275 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@634 -- # local arg=hostrpc 00:13:00.275 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:00.275 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # type -t hostrpc 00:13:00.275 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:00.275 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.275 20:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.533 request: 00:13:00.533 { 00:13:00.533 "name": "nvme0", 00:13:00.533 "trtype": "tcp", 00:13:00.533 "traddr": "10.0.0.3", 00:13:00.533 "adrfam": "ipv4", 00:13:00.533 "trsvcid": "4420", 00:13:00.533 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:00.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9", 00:13:00.533 "prchk_reftag": false, 00:13:00.533 "prchk_guard": false, 00:13:00.533 "hdgst": false, 00:13:00.533 "ddgst": false, 00:13:00.533 "dhchap_key": "key3", 00:13:00.533 "method": "bdev_nvme_attach_controller", 00:13:00.533 "req_id": 1 00:13:00.533 } 00:13:00.533 Got JSON-RPC error response 
00:13:00.533 response: 00:13:00.533 { 00:13:00.533 "code": -5, 00:13:00.533 "message": "Input/output error" 00:13:00.533 } 00:13:00.533 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # es=1 00:13:00.533 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:13:00.533 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:13:00.533 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:13:00.533 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:00.533 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:13:00.533 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:00.533 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:00.533 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:00.533 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@646 -- # local es=0 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:00.792 20:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@634 -- # local arg=hostrpc 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # type -t hostrpc 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:00.792 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:01.051 request: 00:13:01.051 { 00:13:01.051 "name": "nvme0", 00:13:01.051 "trtype": "tcp", 00:13:01.051 "traddr": "10.0.0.3", 00:13:01.051 "adrfam": "ipv4", 00:13:01.051 "trsvcid": "4420", 00:13:01.051 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:01.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9", 00:13:01.051 "prchk_reftag": false, 00:13:01.051 "prchk_guard": false, 00:13:01.051 "hdgst": false, 00:13:01.051 "ddgst": false, 00:13:01.051 "dhchap_key": "key0", 00:13:01.051 "dhchap_ctrlr_key": "key1", 00:13:01.051 "method": "bdev_nvme_attach_controller", 00:13:01.051 "req_id": 1 00:13:01.051 } 00:13:01.051 Got JSON-RPC error response 00:13:01.051 response: 00:13:01.051 { 00:13:01.051 "code": -5, 00:13:01.051 "message": "Input/output error" 00:13:01.051 } 00:13:01.051 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@649 -- # es=1 00:13:01.051 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:13:01.051 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:13:01.051 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:13:01.051 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:01.051 20:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:01.309 00:13:01.309 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:01.309 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.309 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:01.568 20:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.568 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.569 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.827 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:01.827 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:01.827 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77762 00:13:01.827 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 77762 ']' 00:13:01.827 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 77762 00:13:01.827 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:13:01.827 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:01.827 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77762 00:13:01.827 killing process with pid 77762 00:13:01.827 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:01.827 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:01.828 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77762' 00:13:01.828 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 77762 00:13:01.828 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 77762 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # nvmfcleanup 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:02.394 rmmod nvme_tcp 00:13:02.394 rmmod nvme_fabrics 00:13:02.394 rmmod nvme_keyring 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # '[' -n 80669 ']' 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # killprocess 80669 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 80669 ']' 00:13:02.394 
20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 80669 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:02.394 20:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80669 00:13:02.394 killing process with pid 80669 00:13:02.394 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:02.394 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:02.394 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80669' 00:13:02.394 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 80669 00:13:02.394 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 80669 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # iptr 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@783 -- # iptables-save 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@783 -- # iptables-restore 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:13:02.652 
20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.652 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # remove_spdk_ns 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # return 0 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.N3a /tmp/spdk.key-sha256.kdb /tmp/spdk.key-sha384.WHj /tmp/spdk.key-sha512.C8l /tmp/spdk.key-sha512.5ad /tmp/spdk.key-sha384.bqa /tmp/spdk.key-sha256.7w4 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:02.911 ************************************ 00:13:02.911 END TEST nvmf_auth_target 00:13:02.911 ************************************ 00:13:02.911 00:13:02.911 real 2m38.141s 00:13:02.911 user 6m20.766s 00:13:02.911 sys 0m25.711s 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:02.911 ************************************ 00:13:02.911 START TEST nvmf_bdevio_no_huge 00:13:02.911 ************************************ 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:02.911 * Looking for test storage... 
00:13:02.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # prepare_net_devs 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # local -g is_hw=no 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # remove_spdk_ns 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:13:02.911 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # nvmf_veth_init 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 
00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:13:02.912 Cannot find device "nvmf_init_br" 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:13:02.912 Cannot find device "nvmf_init_br2" 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:02.912 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:13:03.170 Cannot find device "nvmf_tgt_br" 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # true 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:13:03.170 Cannot find device "nvmf_tgt_br2" 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # true 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:13:03.170 Cannot find device "nvmf_init_br" 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:13:03.170 Cannot find device "nvmf_init_br2" 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:13:03.170 Cannot find device "nvmf_tgt_br" 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:13:03.170 Cannot find device "nvmf_tgt_br2" 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:13:03.170 Cannot find device "nvmf_br" 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:03.170 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:13:03.171 Cannot find device "nvmf_init_if" 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 
-- # true 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:13:03.171 Cannot find device "nvmf_init_if2" 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:03.171 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:03.171 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:03.171 20:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:13:03.171 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:13:03.429 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:03.429 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:03.429 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:03.430 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:03.430 20:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:13:03.430 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:03.430 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:13:03.430 00:13:03.430 --- 10.0.0.3 ping statistics --- 00:13:03.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.430 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:13:03.430 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:03.430 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:13:03.430 00:13:03.430 --- 10.0.0.4 ping statistics --- 00:13:03.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.430 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:03.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:13:03.430 00:13:03.430 --- 10.0.0.1 ping statistics --- 00:13:03.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.430 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:03.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:13:03.430 00:13:03.430 --- 10.0.0.2 ping statistics --- 00:13:03.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.430 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@453 -- # return 0 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@501 -- # nvmfpid=81030 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # waitforlisten 81030 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 81030 ']' 00:13:03.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:03.430 20:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:03.430 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:03.430 [2024-08-11 20:54:14.116167] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:03.430 [2024-08-11 20:54:14.116273] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:03.688 [2024-08-11 20:54:14.261925] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.688 [2024-08-11 20:54:14.390410] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.689 [2024-08-11 20:54:14.390480] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.689 [2024-08-11 20:54:14.390495] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.689 [2024-08-11 20:54:14.390505] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.689 [2024-08-11 20:54:14.390515] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.689 [2024-08-11 20:54:14.390694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:03.689 [2024-08-11 20:54:14.391997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:03.689 [2024-08-11 20:54:14.392130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:03.689 [2024-08-11 20:54:14.392149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.689 [2024-08-11 20:54:14.398938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:04.624 [2024-08-11 20:54:15.190503] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd 
bdev_malloc_create 64 512 -b Malloc0 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:04.624 Malloc0 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:04.624 [2024-08-11 20:54:15.230891] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@552 -- # config=() 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@552 -- # local subsystem config 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:13:04.624 { 00:13:04.624 "params": { 00:13:04.624 "name": "Nvme$subsystem", 00:13:04.624 "trtype": "$TEST_TRANSPORT", 00:13:04.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:04.624 "adrfam": "ipv4", 00:13:04.624 "trsvcid": "$NVMF_PORT", 00:13:04.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:04.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:04.624 "hdgst": ${hdgst:-false}, 00:13:04.624 "ddgst": ${ddgst:-false} 00:13:04.624 }, 00:13:04.624 "method": "bdev_nvme_attach_controller" 00:13:04.624 } 00:13:04.624 EOF 00:13:04.624 )") 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@574 -- # cat 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@576 -- # jq . 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@577 -- # IFS=, 00:13:04.624 20:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:13:04.624 "params": { 00:13:04.624 "name": "Nvme1", 00:13:04.624 "trtype": "tcp", 00:13:04.624 "traddr": "10.0.0.3", 00:13:04.624 "adrfam": "ipv4", 00:13:04.624 "trsvcid": "4420", 00:13:04.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:04.624 "hdgst": false, 00:13:04.624 "ddgst": false 00:13:04.624 }, 00:13:04.624 "method": "bdev_nvme_attach_controller" 00:13:04.624 }' 00:13:04.624 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:04.624 [2024-08-11 20:54:15.289710] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:04.624 [2024-08-11 20:54:15.289797] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid81066 ] 00:13:04.883 [2024-08-11 20:54:15.432187] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:04.883 [2024-08-11 20:54:15.554670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.883 [2024-08-11 20:54:15.554806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.883 [2024-08-11 20:54:15.555110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.883 [2024-08-11 20:54:15.569627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:05.141 I/O targets: 00:13:05.141 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:05.141 00:13:05.141 00:13:05.141 CUnit - A unit testing framework for C - Version 2.1-3 00:13:05.141 http://cunit.sourceforge.net/ 00:13:05.141 00:13:05.141 00:13:05.141 Suite: bdevio tests on: Nvme1n1 00:13:05.141 Test: blockdev write read block ...passed 00:13:05.141 Test: blockdev write zeroes read block ...passed 00:13:05.141 Test: blockdev write zeroes read no split ...passed 00:13:05.141 Test: blockdev write zeroes read split ...passed 00:13:05.141 Test: blockdev write zeroes read split partial ...passed 00:13:05.141 Test: blockdev reset ...[2024-08-11 20:54:15.774828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:05.141 [2024-08-11 20:54:15.774962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b69a80 (9): Bad file descriptor 00:13:05.141 [2024-08-11 20:54:15.793466] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:05.141 passed 00:13:05.141 Test: blockdev write read 8 blocks ...passed 00:13:05.141 Test: blockdev write read size > 128k ...passed 00:13:05.141 Test: blockdev write read invalid size ...passed 00:13:05.141 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:05.141 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:05.141 Test: blockdev write read max offset ...passed 00:13:05.141 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:05.141 Test: blockdev writev readv 8 blocks ...passed 00:13:05.141 Test: blockdev writev readv 30 x 1block ...passed 00:13:05.141 Test: blockdev writev readv block ...passed 00:13:05.141 Test: blockdev writev readv size > 128k ...passed 00:13:05.141 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:05.141 Test: blockdev comparev and writev ...[2024-08-11 20:54:15.802937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:05.141 [2024-08-11 20:54:15.803134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:05.141 [2024-08-11 20:54:15.803256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:05.141 [2024-08-11 20:54:15.803353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:05.141 [2024-08-11 20:54:15.803779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:05.141 [2024-08-11 20:54:15.803918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:05.141 [2024-08-11 20:54:15.804038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:05.141 [2024-08-11 20:54:15.804121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:05.141 [2024-08-11 20:54:15.804628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:05.141 [2024-08-11 20:54:15.804741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:05.142 [2024-08-11 20:54:15.804856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:05.142 [2024-08-11 20:54:15.804955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:05.142 [2024-08-11 20:54:15.805471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:05.142 [2024-08-11 20:54:15.805574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:05.142 [2024-08-11 20:54:15.805704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:05.142 [2024-08-11 20:54:15.805788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:05.142 passed 00:13:05.142 Test: blockdev nvme passthru rw ...passed 00:13:05.142 Test: blockdev nvme passthru vendor specific ...[2024-08-11 20:54:15.806927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:05.142 [2024-08-11 20:54:15.807068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:05.142 [2024-08-11 20:54:15.807311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:05.142 [2024-08-11 20:54:15.807429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:05.142 [2024-08-11 20:54:15.807639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:05.142 [2024-08-11 20:54:15.807757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:05.142 [2024-08-11 20:54:15.807939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:05.142 [2024-08-11 20:54:15.808125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:05.142 passed 00:13:05.142 Test: blockdev nvme admin passthru ...passed 00:13:05.142 Test: blockdev copy ...passed 00:13:05.142 00:13:05.142 Run Summary: Type Total Ran Passed Failed Inactive 00:13:05.142 suites 1 1 n/a 0 0 00:13:05.142 tests 23 23 23 0 0 00:13:05.142 asserts 152 152 152 0 n/a 00:13:05.142 00:13:05.142 Elapsed time = 0.177 seconds 00:13:05.400 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.400 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@557 -- # xtrace_disable 00:13:05.400 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:05.400 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:13:05.400 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:05.400 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:05.400 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # nvmfcleanup 00:13:05.400 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:05.659 rmmod nvme_tcp 00:13:05.659 rmmod nvme_fabrics 00:13:05.659 rmmod nvme_keyring 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # '[' -n 81030 ']' 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # killprocess 81030 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 81030 ']' 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 81030 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81030 00:13:05.659 killing process with pid 81030 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81030' 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 81030 00:13:05.659 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 81030 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # iptr 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@783 -- # iptables-save 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@783 -- # iptables-restore 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:13:06.227 20:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # remove_spdk_ns 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # return 0 00:13:06.227 00:13:06.227 real 0m3.395s 00:13:06.227 user 0m10.426s 00:13:06.227 sys 0m1.373s 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.227 ************************************ 00:13:06.227 END TEST nvmf_bdevio_no_huge 00:13:06.227 ************************************ 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:06.227 ************************************ 00:13:06.227 START TEST nvmf_tls 00:13:06.227 ************************************ 00:13:06.227 20:54:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:06.486 * Looking for test storage... 
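With the bdevio run wrapped up, tls.sh repeats the harness skeleton every target test in this suite follows: source nvmf/common.sh, rebuild the veth/bridge topology, start nvmf_tgt, and register a cleanup trap so the namespace and bridge are torn down even if the test aborts. Condensed, assuming the helper names visible in the trace and the usual $rootdir variable:

    source "$rootdir/test/nvmf/common.sh"
    nvmftestinit                           # nvmf_br bridge + veth pairs, 10.0.0.1-10.0.0.4
    nvmfappstart -m 0x2 --wait-for-rpc     # target starts with subsystem init paused
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    # per-test rpc_cmd setup and the TLS workloads follow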
00:13:06.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # 
prepare_net_devs 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # local -g is_hw=no 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # remove_spdk_ns 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@452 -- # nvmf_veth_init 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:13:06.487 Cannot find device "nvmf_init_br" 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:13:06.487 Cannot find device "nvmf_init_br2" 
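The run of "Cannot find device" complaints here is expected rather than a failure: before building the topology, nvmf_veth_init first tears down anything left over from a previous run, and on a clean host each of those probes finds nothing to remove (hence the "# true" after every one). The pattern amounts to, roughly:

    # illustrative sketch only - stale-interface cleanup that tolerates missing devices
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 \
               nvmf_br nvmf_init_if nvmf_init_if2; do
        ip link delete "$dev" 2> /dev/null || true
    done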
00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:13:06.487 Cannot find device "nvmf_tgt_br" 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.487 Cannot find device "nvmf_tgt_br2" 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:13:06.487 Cannot find device "nvmf_init_br" 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:13:06.487 Cannot find device "nvmf_init_br2" 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:13:06.487 Cannot find device "nvmf_tgt_br" 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:13:06.487 Cannot find device "nvmf_tgt_br2" 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:13:06.487 Cannot find device "nvmf_br" 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:13:06.487 Cannot find device "nvmf_init_if" 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:13:06.487 Cannot find device "nvmf_init_if2" 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.487 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:06.488 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.488 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:06.488 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:13:06.488 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:06.488 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:06.488 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:06.746 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp 
--dport 4420 -j ACCEPT' 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:13:06.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:06.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:13:06.747 00:13:06.747 --- 10.0.0.3 ping statistics --- 00:13:06.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.747 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:13:06.747 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:06.747 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:13:06.747 00:13:06.747 --- 10.0.0.4 ping statistics --- 00:13:06.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.747 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:06.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:13:06.747 00:13:06.747 --- 10.0.0.1 ping statistics --- 00:13:06.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.747 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:06.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:06.747 00:13:06.747 --- 10.0.0.2 ping statistics --- 00:13:06.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.747 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@453 -- # return 0 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:06.747 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:07.005 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:07.005 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@501 -- # nvmfpid=81294 00:13:07.005 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # waitforlisten 81294 00:13:07.005 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81294 ']' 00:13:07.005 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.005 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:07.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.006 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.006 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:07.006 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:07.006 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:07.006 [2024-08-11 20:54:17.580838] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
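Because the target was started with --wait-for-rpc, subsystem initialization is still paused at this point, which is what lets tls.sh change the socket implementation and TLS parameters over RPC before any listener exists. Stripped of the tracing, the sequence that follows boils down to:

    # issued against the target's /var/tmp/spdk.sock
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13   # negotiate TLS 1.3
    rpc.py framework_start_init                            # resume subsystem init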
00:13:07.006 [2024-08-11 20:54:17.580935] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.006 [2024-08-11 20:54:17.720899] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.265 [2024-08-11 20:54:17.787928] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.265 [2024-08-11 20:54:17.787988] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.265 [2024-08-11 20:54:17.788003] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.265 [2024-08-11 20:54:17.788014] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.265 [2024-08-11 20:54:17.788024] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.265 [2024-08-11 20:54:17.788058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.265 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:07.265 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:07.265 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:13:07.265 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.265 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:07.265 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.265 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:07.265 20:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:07.524 true 00:13:07.524 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:07.524 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:07.782 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:07.782 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:07.782 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:08.041 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:08.041 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:08.300 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:08.300 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:08.300 20:54:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:08.300 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:08.300 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:08.559 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:08.559 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:08.559 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:08.559 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:08.818 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:08.818 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:08.818 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:09.077 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:09.077 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:09.336 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:09.336 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:09.336 20:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:09.595 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:09.595 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@735 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@722 -- # local prefix key digest 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@724 -- # prefix=NVMeTLSkey-1 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@724 -- # key=00112233445566778899aabbccddeeff 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@724 -- # digest=1 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@725 -- # python - 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@735 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@722 -- # local prefix key digest 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@724 -- # prefix=NVMeTLSkey-1 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@724 -- # key=ffeeddccbbaa99887766554433221100 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@724 -- # digest=1 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@725 -- # python - 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.bEvp2YXQdm 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.SaP6PgEHDC 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.bEvp2YXQdm 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.SaP6PgEHDC 00:13:09.854 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:10.176 20:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:10.434 [2024-08-11 20:54:21.051204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:10.434 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.bEvp2YXQdm 00:13:10.434 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.bEvp2YXQdm 00:13:10.434 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:10.693 [2024-08-11 20:54:21.346914] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.693 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:10.951 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:11.209 [2024-08-11 20:54:21.827028] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:11.209 [2024-08-11 20:54:21.827243] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:11.209 20:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:11.468 malloc0 00:13:11.468 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:11.727 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bEvp2YXQdm 00:13:11.727 [2024-08-11 20:54:22.426357] tcp.c:3766:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:11.727 20:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.bEvp2YXQdm 00:13:11.727 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:23.940 Initializing NVMe Controllers 00:13:23.940 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:23.940 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:23.940 Initialization complete. Launching workers. 00:13:23.940 ======================================================== 00:13:23.940 Latency(us) 00:13:23.940 Device Information : IOPS MiB/s Average min max 00:13:23.940 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11591.30 45.28 5522.34 1567.61 7123.38 00:13:23.940 ======================================================== 00:13:23.940 Total : 11591.30 45.28 5522.34 1567.61 7123.38 00:13:23.940 00:13:23.940 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bEvp2YXQdm 00:13:23.940 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:23.940 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:23.940 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:23.940 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bEvp2YXQdm' 00:13:23.940 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:23.941 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=81514 00:13:23.941 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:23.941 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:23.941 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 81514 /var/tmp/bdevperf.sock 00:13:23.941 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81514 ']' 00:13:23.941 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:23.941 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:23.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:23.941 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
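Everything in the TLS cases hinges on that interchange-format PSK: the NVMeTLSkey-1:01:... string is written to a mode-0600 temp file, registered on the target for the host NQN, and then handed to each initiator. Stripped of the tracing, the wiring for this run looks roughly like this (key path is the mktemp output above):

    KEY_PATH=/tmp/tmp.bEvp2YXQdm
    chmod 0600 "$KEY_PATH"
    # target side: allow host1 to connect to cnode1 using this PSK
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
    # initiator side: bdevperf attaches the TLS-protected NVMe/TCP controller
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$KEY_PATH"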
00:13:23.941 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:23.941 20:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.941 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:23.941 [2024-08-11 20:54:32.700674] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:23.941 [2024-08-11 20:54:32.700779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81514 ] 00:13:23.941 [2024-08-11 20:54:32.841659] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.941 [2024-08-11 20:54:32.915647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.941 [2024-08-11 20:54:32.971892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:23.941 20:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:23.941 20:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:23.941 20:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bEvp2YXQdm 00:13:23.941 [2024-08-11 20:54:33.844681] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:23.941 [2024-08-11 20:54:33.844776] nvme_tcp.c:2594:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:23.941 TLSTESTn1 00:13:23.941 20:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:23.941 Running I/O for 10 seconds... 
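The run_bdevperf helper invoked at target/tls.sh@143 follows the same pattern for the bdev layer: bdevperf is started in wait mode (-z) on its own RPC socket, the TLS controller is attached through that socket, and only then is the workload triggered. A condensed sketch of the commands visible above (waitforlisten is the autotest helper that blocks until the socket exists):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.bEvp2YXQdm
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests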
00:13:33.994 00:13:33.994 Latency(us) 00:13:33.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.994 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:33.994 Verification LBA range: start 0x0 length 0x2000 00:13:33.994 TLSTESTn1 : 10.03 4812.91 18.80 0.00 0.00 26545.67 5659.93 17635.14 00:13:33.994 =================================================================================================================== 00:13:33.994 Total : 4812.91 18.80 0.00 0.00 26545.67 5659.93 17635.14 00:13:33.994 0 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 81514 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81514 ']' 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81514 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81514 00:13:33.994 killing process with pid 81514 00:13:33.994 Received shutdown signal, test time was about 10.000000 seconds 00:13:33.994 00:13:33.994 Latency(us) 00:13:33.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.994 =================================================================================================================== 00:13:33.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81514' 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81514 00:13:33.994 [2024-08-11 20:54:44.101561] app.c:1025:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81514 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SaP6PgEHDC 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@646 -- # local es=0 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SaP6PgEHDC 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@634 -- # local arg=run_bdevperf 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # type -t run_bdevperf 00:13:33.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SaP6PgEHDC 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SaP6PgEHDC' 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=81653 00:13:33.994 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:33.995 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 81653 /var/tmp/bdevperf.sock 00:13:33.995 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81653 ']' 00:13:33.995 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.995 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:33.995 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:33.995 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:33.995 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:33.995 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:33.995 [2024-08-11 20:54:44.335945] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:13:33.995 [2024-08-11 20:54:44.336031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81653 ] 00:13:33.995 [2024-08-11 20:54:44.465709] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.995 [2024-08-11 20:54:44.524045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.995 [2024-08-11 20:54:44.575051] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:33.995 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:33.995 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:33.995 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SaP6PgEHDC 00:13:34.254 [2024-08-11 20:54:44.878421] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:34.254 [2024-08-11 20:54:44.878777] nvme_tcp.c:2594:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:34.254 [2024-08-11 20:54:44.887262] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:34.254 [2024-08-11 20:54:44.888152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12554c0 (107): Transport endpoint is not connected 00:13:34.254 [2024-08-11 20:54:44.889143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12554c0 (9): Bad file descriptor 00:13:34.254 [2024-08-11 20:54:44.890141] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:34.254 [2024-08-11 20:54:44.890344] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:34.254 [2024-08-11 20:54:44.890470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
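target/tls.sh@146 deliberately attaches with the second key (/tmp/tmp.SaP6PgEHDC) even though only /tmp/tmp.bEvp2YXQdm was registered for host1 on the target, so the connection never comes up and bdev_nvme_attach_controller returns the JSON-RPC error shown next. The NOT wrapper from autotest_common.sh inverts that exit status, turning the expected failure into a test pass. A simplified sketch of the pattern (the real helper also special-cases exits above 128 and known-issue matching):

  NOT() {
      local es=0
      "$@" || es=$?
      # for a negative test, success means the wrapped command failed
      (( es != 0 ))
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SaP6PgEHDC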
00:13:34.254 request: 00:13:34.254 { 00:13:34.254 "name": "TLSTEST", 00:13:34.254 "trtype": "tcp", 00:13:34.254 "traddr": "10.0.0.3", 00:13:34.254 "adrfam": "ipv4", 00:13:34.254 "trsvcid": "4420", 00:13:34.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:34.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:34.254 "prchk_reftag": false, 00:13:34.254 "prchk_guard": false, 00:13:34.254 "hdgst": false, 00:13:34.254 "ddgst": false, 00:13:34.254 "psk": "/tmp/tmp.SaP6PgEHDC", 00:13:34.254 "method": "bdev_nvme_attach_controller", 00:13:34.254 "req_id": 1 00:13:34.254 } 00:13:34.254 Got JSON-RPC error response 00:13:34.254 response: 00:13:34.254 { 00:13:34.254 "code": -5, 00:13:34.254 "message": "Input/output error" 00:13:34.254 } 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 81653 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81653 ']' 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81653 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81653 00:13:34.254 killing process with pid 81653 00:13:34.254 Received shutdown signal, test time was about 10.000000 seconds 00:13:34.254 00:13:34.254 Latency(us) 00:13:34.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.254 =================================================================================================================== 00:13:34.254 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81653' 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81653 00:13:34.254 [2024-08-11 20:54:44.943194] app.c:1025:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:34.254 20:54:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81653 00:13:34.513 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:34.513 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # es=1 00:13:34.513 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:13:34.513 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:13:34.513 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:13:34.513 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bEvp2YXQdm 00:13:34.513 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@646 -- # local es=0 00:13:34.513 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bEvp2YXQdm 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@634 -- # local arg=run_bdevperf 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # type -t run_bdevperf 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bEvp2YXQdm 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bEvp2YXQdm' 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=81668 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 81668 /var/tmp/bdevperf.sock 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81668 ']' 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:34.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:34.514 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.514 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:34.514 [2024-08-11 20:54:45.201059] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:13:34.514 [2024-08-11 20:54:45.201151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81668 ] 00:13:34.773 [2024-08-11 20:54:45.337813] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.773 [2024-08-11 20:54:45.398593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.773 [2024-08-11 20:54:45.449207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:34.773 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:34.773 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:34.773 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.bEvp2YXQdm 00:13:35.033 [2024-08-11 20:54:45.700515] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:35.033 [2024-08-11 20:54:45.700858] nvme_tcp.c:2594:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:35.033 [2024-08-11 20:54:45.707291] tcp.c: 946:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:35.033 [2024-08-11 20:54:45.707513] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:35.033 [2024-08-11 20:54:45.707798] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:35.033 [2024-08-11 20:54:45.708275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fa4c0 (107): Transport endpoint is not connected 00:13:35.033 [2024-08-11 20:54:45.709253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fa4c0 (9): Bad file descriptor 00:13:35.033 [2024-08-11 20:54:45.710250] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:35.033 [2024-08-11 20:54:45.710431] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:35.033 [2024-08-11 20:54:45.710546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
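The failure mode in this host2 case, whose request/response follows below, is different from the wrong-key case: the target cannot even find a PSK for the identity the initiator offers. As the tcp_sock_get_key error shows, the identity is a fixed prefix followed by the host and subsystem NQNs, and nvmf_subsystem_add_host was only ever called for host1, so the lookup for host2 matches nothing. Roughly (sketch; the NVMe0R01 prefix carries version and hash indicators from the NVMe/TCP TLS PSK identity format):

  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  identity="NVMe0R01 ${hostnqn} ${subnqn}"
  # -> "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"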
00:13:35.033 request: 00:13:35.033 { 00:13:35.033 "name": "TLSTEST", 00:13:35.033 "trtype": "tcp", 00:13:35.033 "traddr": "10.0.0.3", 00:13:35.033 "adrfam": "ipv4", 00:13:35.033 "trsvcid": "4420", 00:13:35.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:35.033 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:35.033 "prchk_reftag": false, 00:13:35.033 "prchk_guard": false, 00:13:35.033 "hdgst": false, 00:13:35.033 "ddgst": false, 00:13:35.033 "psk": "/tmp/tmp.bEvp2YXQdm", 00:13:35.033 "method": "bdev_nvme_attach_controller", 00:13:35.033 "req_id": 1 00:13:35.033 } 00:13:35.033 Got JSON-RPC error response 00:13:35.033 response: 00:13:35.033 { 00:13:35.033 "code": -5, 00:13:35.033 "message": "Input/output error" 00:13:35.033 } 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 81668 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81668 ']' 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81668 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81668 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:35.033 killing process with pid 81668 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81668' 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81668 00:13:35.033 Received shutdown signal, test time was about 10.000000 seconds 00:13:35.033 00:13:35.033 Latency(us) 00:13:35.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.033 =================================================================================================================== 00:13:35.033 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:35.033 [2024-08-11 20:54:45.765717] app.c:1025:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:35.033 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81668 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # es=1 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bEvp2YXQdm 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@646 -- # local es=0 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bEvp2YXQdm 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@634 -- # local arg=run_bdevperf 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # type -t run_bdevperf 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bEvp2YXQdm 00:13:35.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bEvp2YXQdm' 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=81682 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 81682 /var/tmp/bdevperf.sock 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81682 ']' 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:35.293 20:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:35.293 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:35.293 [2024-08-11 20:54:45.987618] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:13:35.293 [2024-08-11 20:54:45.987713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81682 ] 00:13:35.552 [2024-08-11 20:54:46.117794] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.552 [2024-08-11 20:54:46.179703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.552 [2024-08-11 20:54:46.230140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:35.552 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:35.552 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:35.552 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bEvp2YXQdm 00:13:35.812 [2024-08-11 20:54:46.484948] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:35.812 [2024-08-11 20:54:46.485059] nvme_tcp.c:2594:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:35.812 [2024-08-11 20:54:46.489442] tcp.c: 946:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:35.812 [2024-08-11 20:54:46.489480] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:35.812 [2024-08-11 20:54:46.489541] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:35.812 [2024-08-11 20:54:46.490214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10754c0 (107): Transport endpoint is not connected 00:13:35.812 [2024-08-11 20:54:46.491203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10754c0 (9): Bad file descriptor 00:13:35.812 [2024-08-11 20:54:46.492198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:35.812 [2024-08-11 20:54:46.492235] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:35.812 [2024-08-11 20:54:46.492280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
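The cnode2 variant fails for the same reason: only nqn.2016-06.io.spdk:cnode1 exists on this target, and only host1 has a PSK registered against it. Purely as a hypothetical sketch, making this pairing connect would require creating the second subsystem and registering the host and PSK on it as well (the serial number below is made up for illustration, it is not taken from the trace):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 -k
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bEvp2YXQdm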
00:13:35.812 request: 00:13:35.812 { 00:13:35.812 "name": "TLSTEST", 00:13:35.812 "trtype": "tcp", 00:13:35.812 "traddr": "10.0.0.3", 00:13:35.812 "adrfam": "ipv4", 00:13:35.812 "trsvcid": "4420", 00:13:35.812 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:35.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:35.812 "prchk_reftag": false, 00:13:35.812 "prchk_guard": false, 00:13:35.812 "hdgst": false, 00:13:35.812 "ddgst": false, 00:13:35.812 "psk": "/tmp/tmp.bEvp2YXQdm", 00:13:35.812 "method": "bdev_nvme_attach_controller", 00:13:35.812 "req_id": 1 00:13:35.812 } 00:13:35.812 Got JSON-RPC error response 00:13:35.812 response: 00:13:35.812 { 00:13:35.812 "code": -5, 00:13:35.812 "message": "Input/output error" 00:13:35.812 } 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 81682 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81682 ']' 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81682 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81682 00:13:35.812 killing process with pid 81682 00:13:35.812 Received shutdown signal, test time was about 10.000000 seconds 00:13:35.812 00:13:35.812 Latency(us) 00:13:35.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.812 =================================================================================================================== 00:13:35.812 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81682' 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81682 00:13:35.812 [2024-08-11 20:54:46.539904] app.c:1025:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:35.812 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81682 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # es=1 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@646 -- # local es=0 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@634 -- # local arg=run_bdevperf 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # type -t run_bdevperf 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=81702 00:13:36.071 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.072 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 81702 /var/tmp/bdevperf.sock 00:13:36.072 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81702 ']' 00:13:36.072 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.072 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:36.072 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.072 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:36.072 20:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.072 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:36.072 [2024-08-11 20:54:46.757877] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:13:36.072 [2024-08-11 20:54:46.758122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81702 ] 00:13:36.331 [2024-08-11 20:54:46.886716] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.331 [2024-08-11 20:54:46.944860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.331 [2024-08-11 20:54:46.996499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:36.331 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:36.331 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:36.331 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:36.594 [2024-08-11 20:54:47.307492] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:36.594 [2024-08-11 20:54:47.309024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1769e10 (9): Bad file descriptor 00:13:36.594 [2024-08-11 20:54:47.310021] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:36.594 [2024-08-11 20:54:47.310201] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:36.594 [2024-08-11 20:54:47.310238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
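target/tls.sh@155 drops the PSK entirely. The listener on 10.0.0.3:4420 was created with -k, i.e. with TLS enabled, so this plain attach is torn down during connection setup and ends in the same Input/output error reported in the response below. The failing call, compared with the variant that succeeded earlier in the run:

  # fails: no --psk against a listener created with nvmf_subsystem_add_listener ... -k
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # succeeded at target/tls.sh@143: the same call with --psk /tmp/tmp.bEvp2YXQdm appended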
00:13:36.594 request: 00:13:36.594 { 00:13:36.594 "name": "TLSTEST", 00:13:36.594 "trtype": "tcp", 00:13:36.594 "traddr": "10.0.0.3", 00:13:36.594 "adrfam": "ipv4", 00:13:36.594 "trsvcid": "4420", 00:13:36.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:36.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:36.594 "prchk_reftag": false, 00:13:36.594 "prchk_guard": false, 00:13:36.594 "hdgst": false, 00:13:36.594 "ddgst": false, 00:13:36.594 "method": "bdev_nvme_attach_controller", 00:13:36.594 "req_id": 1 00:13:36.595 } 00:13:36.595 Got JSON-RPC error response 00:13:36.595 response: 00:13:36.595 { 00:13:36.595 "code": -5, 00:13:36.595 "message": "Input/output error" 00:13:36.595 } 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 81702 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81702 ']' 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81702 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81702 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:36.595 killing process with pid 81702 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81702' 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81702 00:13:36.595 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.595 00:13:36.595 Latency(us) 00:13:36.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.595 =================================================================================================================== 00:13:36.595 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:36.595 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81702 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # es=1 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 81294 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81294 ']' 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81294 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 
-- # ps --no-headers -o comm= 81294 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:36.876 killing process with pid 81294 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81294' 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81294 00:13:36.876 [2024-08-11 20:54:47.569273] app.c:1025:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:36.876 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81294 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@735 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@722 -- # local prefix key digest 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@724 -- # prefix=NVMeTLSkey-1 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@724 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@724 -- # digest=2 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@725 -- # python - 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Mvcj9h1dED 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Mvcj9h1dED 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@501 -- # nvmfpid=81732 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # waitforlisten 81732 00:13:37.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
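Before restarting the target, target/tls.sh@159 derives a second interchange key from a longer secret with digest 2 (the first key used digest 1). nvmf/common.sh's format_interchange_psk wraps the configured secret as NVMeTLSkey-1:<digest>:<base64 payload>:, and judging by the embedded "python -" helper the payload is the secret bytes with a short checksum appended. A sketch under that assumption, mirroring the script's own embedded-python idiom (the CRC width and byte order here are an assumption, not taken from the trace):

  key=00112233445566778899aabbccddeeff0011223344556677
  digest=2
  python3 - "$key" "$digest" <<'EOF'
  import base64, sys, zlib
  secret = sys.argv[1].encode()
  digest = int(sys.argv[2])
  # assumption: payload = secret bytes + CRC-32 of the secret, little-endian
  payload = secret + zlib.crc32(secret).to_bytes(4, "little")
  print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(payload).decode()}:")
  EOF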
00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81732 ']' 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:37.136 20:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.136 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:37.136 [2024-08-11 20:54:47.887290] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:37.136 [2024-08-11 20:54:47.887519] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.395 [2024-08-11 20:54:48.028129] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.395 [2024-08-11 20:54:48.085662] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.395 [2024-08-11 20:54:48.085712] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.395 [2024-08-11 20:54:48.085738] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.395 [2024-08-11 20:54:48.085746] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.395 [2024-08-11 20:54:48.085752] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
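This restart launches nvmf_tgt with -e 0xFFFF and shared-memory instance id 0, which is what the app_setup_trace notices above refer to. Following the hints printed there, a live snapshot can be taken while the target runs, or the trace file can be copied for offline analysis (the destination path below is arbitrary):

  # live snapshot of nvmf tracepoints for instance 0
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for later inspection
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0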
00:13:37.395 [2024-08-11 20:54:48.085779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.395 [2024-08-11 20:54:48.137711] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:37.654 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:37.654 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:37.654 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:13:37.654 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.654 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.654 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.654 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Mvcj9h1dED 00:13:37.654 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Mvcj9h1dED 00:13:37.654 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:37.654 [2024-08-11 20:54:48.431250] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.913 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:38.172 20:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:38.431 [2024-08-11 20:54:48.987345] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:38.431 [2024-08-11 20:54:48.987547] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:38.431 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:38.431 malloc0 00:13:38.690 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:38.690 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Mvcj9h1dED 00:13:38.949 [2024-08-11 20:54:49.603661] tcp.c:3766:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mvcj9h1dED 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Mvcj9h1dED' 00:13:38.949 20:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=81774 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:38.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 81774 /var/tmp/bdevperf.sock 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81774 ']' 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:38.949 20:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.949 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:38.949 [2024-08-11 20:54:49.663865] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:38.949 [2024-08-11 20:54:49.663944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81774 ] 00:13:39.207 [2024-08-11 20:54:49.796695] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.207 [2024-08-11 20:54:49.869168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.207 [2024-08-11 20:54:49.926388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:40.144 20:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:40.144 20:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:40.144 20:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Mvcj9h1dED 00:13:40.144 [2024-08-11 20:54:50.848324] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:40.144 [2024-08-11 20:54:50.848691] nvme_tcp.c:2594:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:40.144 TLSTESTn1 00:13:40.403 20:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:40.403 Running I/O for 10 seconds... 
00:13:50.376 00:13:50.376 Latency(us) 00:13:50.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.376 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:50.376 Verification LBA range: start 0x0 length 0x2000 00:13:50.376 TLSTESTn1 : 10.03 4632.87 18.10 0.00 0.00 27574.53 5928.03 21090.68 00:13:50.376 =================================================================================================================== 00:13:50.376 Total : 4632.87 18.10 0.00 0.00 27574.53 5928.03 21090.68 00:13:50.376 0 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 81774 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81774 ']' 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81774 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81774 00:13:50.376 killing process with pid 81774 00:13:50.376 Received shutdown signal, test time was about 10.000000 seconds 00:13:50.376 00:13:50.376 Latency(us) 00:13:50.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.376 =================================================================================================================== 00:13:50.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81774' 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81774 00:13:50.376 [2024-08-11 20:55:01.136458] app.c:1025:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:50.376 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81774 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Mvcj9h1dED 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mvcj9h1dED 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@646 -- # local es=0 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mvcj9h1dED 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@634 -- # local arg=run_bdevperf 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # type -t run_bdevperf 00:13:50.634 20:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Mvcj9h1dED 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Mvcj9h1dED' 00:13:50.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=81908 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 81908 /var/tmp/bdevperf.sock 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81908 ']' 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:50.634 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.634 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:50.634 [2024-08-11 20:55:01.380212] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:13:50.634 [2024-08-11 20:55:01.380452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81908 ] 00:13:50.893 [2024-08-11 20:55:01.517910] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.893 [2024-08-11 20:55:01.573510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.893 [2024-08-11 20:55:01.624184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.152 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:51.152 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:51.152 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Mvcj9h1dED 00:13:51.152 [2024-08-11 20:55:01.919767] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.152 [2024-08-11 20:55:01.920018] bdev_nvme.c:6142:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:51.152 [2024-08-11 20:55:01.920031] bdev_nvme.c:6247:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Mvcj9h1dED 00:13:51.152 request: 00:13:51.152 { 00:13:51.152 "name": "TLSTEST", 00:13:51.152 "trtype": "tcp", 00:13:51.152 "traddr": "10.0.0.3", 00:13:51.152 "adrfam": "ipv4", 00:13:51.152 "trsvcid": "4420", 00:13:51.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:51.152 "prchk_reftag": false, 00:13:51.152 "prchk_guard": false, 00:13:51.152 "hdgst": false, 00:13:51.152 "ddgst": false, 00:13:51.152 "psk": "/tmp/tmp.Mvcj9h1dED", 00:13:51.152 "method": "bdev_nvme_attach_controller", 00:13:51.152 "req_id": 1 00:13:51.152 } 00:13:51.152 Got JSON-RPC error response 00:13:51.152 response: 00:13:51.152 { 00:13:51.152 "code": -1, 00:13:51.152 "message": "Operation not permitted" 00:13:51.152 } 00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 81908 00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81908 ']' 00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81908 00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81908 00:13:51.410 killing process with pid 81908 00:13:51.410 Received shutdown signal, test time was about 10.000000 seconds 00:13:51.410 00:13:51.410 Latency(us) 00:13:51.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.410 =================================================================================================================== 00:13:51.410 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 
00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81908' 00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81908 00:13:51.410 20:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81908 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # es=1 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 81732 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81732 ']' 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81732 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:51.410 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81732 00:13:51.669 killing process with pid 81732 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81732' 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81732 00:13:51.669 [2024-08-11 20:55:02.207105] app.c:1025:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81732 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@501 -- # nvmfpid=81933 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # waitforlisten 81933 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81933 ']' 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.669 20:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:51.669 20:55:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.928 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:51.928 [2024-08-11 20:55:02.461344] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:51.928 [2024-08-11 20:55:02.461571] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.928 [2024-08-11 20:55:02.598165] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.928 [2024-08-11 20:55:02.658245] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.928 [2024-08-11 20:55:02.658333] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.928 [2024-08-11 20:55:02.658361] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.928 [2024-08-11 20:55:02.658369] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.928 [2024-08-11 20:55:02.658376] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:51.928 [2024-08-11 20:55:02.658404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.187 [2024-08-11 20:55:02.713572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Mvcj9h1dED 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@646 -- # local es=0 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Mvcj9h1dED 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@634 -- # local arg=setup_nvmf_tgt 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # type -t setup_nvmf_tgt 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # setup_nvmf_tgt /tmp/tmp.Mvcj9h1dED 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Mvcj9h1dED 00:13:52.754 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:53.011 [2024-08-11 20:55:03.742179] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.011 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:53.270 20:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:53.528 [2024-08-11 20:55:04.214249] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:53.528 [2024-08-11 20:55:04.214653] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:53.528 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:53.787 malloc0 00:13:53.787 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:54.046 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Mvcj9h1dED 00:13:54.304 [2024-08-11 20:55:04.841754] tcp.c:3676:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:54.304 [2024-08-11 20:55:04.841997] tcp.c:3762:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:54.304 [2024-08-11 20:55:04.842037] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:54.304 request: 00:13:54.304 { 00:13:54.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.305 "host": "nqn.2016-06.io.spdk:host1", 00:13:54.305 "psk": "/tmp/tmp.Mvcj9h1dED", 00:13:54.305 "method": "nvmf_subsystem_add_host", 00:13:54.305 "req_id": 1 00:13:54.305 } 00:13:54.305 Got JSON-RPC error response 00:13:54.305 response: 00:13:54.305 { 00:13:54.305 "code": -32603, 00:13:54.305 "message": "Internal error" 00:13:54.305 } 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@649 -- # es=1 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 81933 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81933 ']' 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81933 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81933 00:13:54.305 killing process with pid 81933 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81933' 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81933 00:13:54.305 20:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81933 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Mvcj9h1dED 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@501 -- # nvmfpid=81996 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # waitforlisten 81996 00:13:54.564 20:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 81996 ']' 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:54.564 20:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.564 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:54.564 [2024-08-11 20:55:05.157634] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:54.564 [2024-08-11 20:55:05.157728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.564 [2024-08-11 20:55:05.290986] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.822 [2024-08-11 20:55:05.351399] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.822 [2024-08-11 20:55:05.351457] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.822 [2024-08-11 20:55:05.351468] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.822 [2024-08-11 20:55:05.351475] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.823 [2024-08-11 20:55:05.351481] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:54.823 [2024-08-11 20:55:05.351507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.823 [2024-08-11 20:55:05.402441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:55.390 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:55.390 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:55.390 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:13:55.390 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:55.390 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.390 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.390 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Mvcj9h1dED 00:13:55.390 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Mvcj9h1dED 00:13:55.390 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:55.649 [2024-08-11 20:55:06.354533] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.649 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:55.908 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:56.167 [2024-08-11 20:55:06.834631] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:56.167 [2024-08-11 20:55:06.834862] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:56.167 20:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:56.426 malloc0 00:13:56.426 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:56.685 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Mvcj9h1dED 00:13:56.945 [2024-08-11 20:55:07.509026] tcp.c:3766:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:56.945 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:56.945 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=82045 00:13:56.945 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:56.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
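After the deliberately failed runs with a world-readable key (the chmod 0666 above makes both bdev_nvme_attach_controller and nvmf_subsystem_add_host reject the file with "Incorrect permissions for PSK file"), the key is restored to 0600 and the target side is rebuilt: TCP transport, subsystem, TLS listener (-k), malloc namespace, and a host entry that references the PSK. A condensed sketch of that RPC sequence, assuming the default /var/tmp/spdk.sock target socket and the paths shown in the log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  PSK=/tmp/tmp.Mvcj9h1dED
  chmod 0600 "$PSK"                                    # PSK files must not be group/world accessible
  $RPC nvmf_create_transport -t tcp -o                 # create the TCP transport
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k = TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0           # 32 MiB backing bdev, 4 KiB blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"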
00:13:56.945 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 82045 /var/tmp/bdevperf.sock 00:13:56.945 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 82045 ']' 00:13:56.945 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:56.945 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:56.945 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:56.945 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:56.945 20:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.945 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:56.945 [2024-08-11 20:55:07.580148] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:56.945 [2024-08-11 20:55:07.580425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82045 ] 00:13:56.945 [2024-08-11 20:55:07.718500] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.204 [2024-08-11 20:55:07.793099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.204 [2024-08-11 20:55:07.844889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.772 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:57.772 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:57.772 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Mvcj9h1dED 00:13:58.031 [2024-08-11 20:55:08.704400] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:58.032 [2024-08-11 20:55:08.704509] nvme_tcp.c:2594:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:58.032 TLSTESTn1 00:13:58.032 20:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:58.600 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:13:58.600 "subsystems": [ 00:13:58.600 { 00:13:58.600 "subsystem": "keyring", 00:13:58.600 "config": [] 00:13:58.600 }, 00:13:58.600 { 00:13:58.600 "subsystem": "iobuf", 00:13:58.600 "config": [ 00:13:58.600 { 00:13:58.600 "method": "iobuf_set_options", 00:13:58.600 "params": { 00:13:58.600 "small_pool_count": 8192, 00:13:58.600 "large_pool_count": 1024, 00:13:58.600 "small_bufsize": 8192, 00:13:58.600 "large_bufsize": 135168 00:13:58.600 } 00:13:58.600 } 00:13:58.600 ] 00:13:58.600 }, 00:13:58.600 { 00:13:58.600 "subsystem": "sock", 00:13:58.600 "config": [ 00:13:58.600 { 00:13:58.600 "method": "sock_set_default_impl", 00:13:58.600 "params": { 00:13:58.600 "impl_name": 
"uring" 00:13:58.600 } 00:13:58.600 }, 00:13:58.600 { 00:13:58.600 "method": "sock_impl_set_options", 00:13:58.600 "params": { 00:13:58.600 "impl_name": "ssl", 00:13:58.600 "recv_buf_size": 4096, 00:13:58.600 "send_buf_size": 4096, 00:13:58.600 "enable_recv_pipe": true, 00:13:58.600 "enable_quickack": false, 00:13:58.600 "enable_placement_id": 0, 00:13:58.600 "enable_zerocopy_send_server": true, 00:13:58.600 "enable_zerocopy_send_client": false, 00:13:58.600 "zerocopy_threshold": 0, 00:13:58.600 "tls_version": 0, 00:13:58.600 "enable_ktls": false 00:13:58.600 } 00:13:58.600 }, 00:13:58.600 { 00:13:58.600 "method": "sock_impl_set_options", 00:13:58.600 "params": { 00:13:58.600 "impl_name": "posix", 00:13:58.600 "recv_buf_size": 2097152, 00:13:58.600 "send_buf_size": 2097152, 00:13:58.600 "enable_recv_pipe": true, 00:13:58.600 "enable_quickack": false, 00:13:58.600 "enable_placement_id": 0, 00:13:58.600 "enable_zerocopy_send_server": true, 00:13:58.600 "enable_zerocopy_send_client": false, 00:13:58.600 "zerocopy_threshold": 0, 00:13:58.600 "tls_version": 0, 00:13:58.600 "enable_ktls": false 00:13:58.600 } 00:13:58.600 }, 00:13:58.600 { 00:13:58.600 "method": "sock_impl_set_options", 00:13:58.600 "params": { 00:13:58.600 "impl_name": "uring", 00:13:58.600 "recv_buf_size": 2097152, 00:13:58.600 "send_buf_size": 2097152, 00:13:58.600 "enable_recv_pipe": true, 00:13:58.600 "enable_quickack": false, 00:13:58.600 "enable_placement_id": 0, 00:13:58.600 "enable_zerocopy_send_server": false, 00:13:58.600 "enable_zerocopy_send_client": false, 00:13:58.600 "zerocopy_threshold": 0, 00:13:58.600 "tls_version": 0, 00:13:58.600 "enable_ktls": false 00:13:58.600 } 00:13:58.600 } 00:13:58.600 ] 00:13:58.600 }, 00:13:58.600 { 00:13:58.600 "subsystem": "vmd", 00:13:58.600 "config": [] 00:13:58.600 }, 00:13:58.600 { 00:13:58.600 "subsystem": "accel", 00:13:58.600 "config": [ 00:13:58.600 { 00:13:58.600 "method": "accel_set_options", 00:13:58.600 "params": { 00:13:58.600 "small_cache_size": 128, 00:13:58.600 "large_cache_size": 16, 00:13:58.600 "task_count": 2048, 00:13:58.600 "sequence_count": 2048, 00:13:58.600 "buf_count": 2048 00:13:58.600 } 00:13:58.600 } 00:13:58.600 ] 00:13:58.600 }, 00:13:58.601 { 00:13:58.601 "subsystem": "bdev", 00:13:58.601 "config": [ 00:13:58.601 { 00:13:58.601 "method": "bdev_set_options", 00:13:58.601 "params": { 00:13:58.601 "bdev_io_pool_size": 65535, 00:13:58.601 "bdev_io_cache_size": 256, 00:13:58.601 "bdev_auto_examine": true, 00:13:58.601 "iobuf_small_cache_size": 128, 00:13:58.601 "iobuf_large_cache_size": 16 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "bdev_raid_set_options", 00:13:58.601 "params": { 00:13:58.601 "process_window_size_kb": 1024, 00:13:58.601 "process_max_bandwidth_mb_sec": 0 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "bdev_iscsi_set_options", 00:13:58.601 "params": { 00:13:58.601 "timeout_sec": 30 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "bdev_nvme_set_options", 00:13:58.601 "params": { 00:13:58.601 "action_on_timeout": "none", 00:13:58.601 "timeout_us": 0, 00:13:58.601 "timeout_admin_us": 0, 00:13:58.601 "keep_alive_timeout_ms": 10000, 00:13:58.601 "arbitration_burst": 0, 00:13:58.601 "low_priority_weight": 0, 00:13:58.601 "medium_priority_weight": 0, 00:13:58.601 "high_priority_weight": 0, 00:13:58.601 "nvme_adminq_poll_period_us": 10000, 00:13:58.601 "nvme_ioq_poll_period_us": 0, 00:13:58.601 "io_queue_requests": 0, 00:13:58.601 "delay_cmd_submit": true, 00:13:58.601 
"transport_retry_count": 4, 00:13:58.601 "bdev_retry_count": 3, 00:13:58.601 "transport_ack_timeout": 0, 00:13:58.601 "ctrlr_loss_timeout_sec": 0, 00:13:58.601 "reconnect_delay_sec": 0, 00:13:58.601 "fast_io_fail_timeout_sec": 0, 00:13:58.601 "disable_auto_failback": false, 00:13:58.601 "generate_uuids": false, 00:13:58.601 "transport_tos": 0, 00:13:58.601 "nvme_error_stat": false, 00:13:58.601 "rdma_srq_size": 0, 00:13:58.601 "io_path_stat": false, 00:13:58.601 "allow_accel_sequence": false, 00:13:58.601 "rdma_max_cq_size": 0, 00:13:58.601 "rdma_cm_event_timeout_ms": 0, 00:13:58.601 "dhchap_digests": [ 00:13:58.601 "sha256", 00:13:58.601 "sha384", 00:13:58.601 "sha512" 00:13:58.601 ], 00:13:58.601 "dhchap_dhgroups": [ 00:13:58.601 "null", 00:13:58.601 "ffdhe2048", 00:13:58.601 "ffdhe3072", 00:13:58.601 "ffdhe4096", 00:13:58.601 "ffdhe6144", 00:13:58.601 "ffdhe8192" 00:13:58.601 ] 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "bdev_nvme_set_hotplug", 00:13:58.601 "params": { 00:13:58.601 "period_us": 100000, 00:13:58.601 "enable": false 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "bdev_malloc_create", 00:13:58.601 "params": { 00:13:58.601 "name": "malloc0", 00:13:58.601 "num_blocks": 8192, 00:13:58.601 "block_size": 4096, 00:13:58.601 "physical_block_size": 4096, 00:13:58.601 "uuid": "ebc5936d-c4aa-4e18-901e-6533da881821", 00:13:58.601 "optimal_io_boundary": 0, 00:13:58.601 "md_size": 0, 00:13:58.601 "dif_type": 0, 00:13:58.601 "dif_is_head_of_md": false, 00:13:58.601 "dif_pi_format": 0 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "bdev_wait_for_examine" 00:13:58.601 } 00:13:58.601 ] 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "subsystem": "nbd", 00:13:58.601 "config": [] 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "subsystem": "scheduler", 00:13:58.601 "config": [ 00:13:58.601 { 00:13:58.601 "method": "framework_set_scheduler", 00:13:58.601 "params": { 00:13:58.601 "name": "static" 00:13:58.601 } 00:13:58.601 } 00:13:58.601 ] 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "subsystem": "nvmf", 00:13:58.601 "config": [ 00:13:58.601 { 00:13:58.601 "method": "nvmf_set_config", 00:13:58.601 "params": { 00:13:58.601 "discovery_filter": "match_any", 00:13:58.601 "admin_cmd_passthru": { 00:13:58.601 "identify_ctrlr": false 00:13:58.601 } 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "nvmf_set_max_subsystems", 00:13:58.601 "params": { 00:13:58.601 "max_subsystems": 1024 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "nvmf_set_crdt", 00:13:58.601 "params": { 00:13:58.601 "crdt1": 0, 00:13:58.601 "crdt2": 0, 00:13:58.601 "crdt3": 0 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "nvmf_create_transport", 00:13:58.601 "params": { 00:13:58.601 "trtype": "TCP", 00:13:58.601 "max_queue_depth": 128, 00:13:58.601 "max_io_qpairs_per_ctrlr": 127, 00:13:58.601 "in_capsule_data_size": 4096, 00:13:58.601 "max_io_size": 131072, 00:13:58.601 "io_unit_size": 131072, 00:13:58.601 "max_aq_depth": 128, 00:13:58.601 "num_shared_buffers": 511, 00:13:58.601 "buf_cache_size": 4294967295, 00:13:58.601 "dif_insert_or_strip": false, 00:13:58.601 "zcopy": false, 00:13:58.601 "c2h_success": false, 00:13:58.601 "sock_priority": 0, 00:13:58.601 "abort_timeout_sec": 1, 00:13:58.601 "ack_timeout": 0, 00:13:58.601 "data_wr_pool_size": 0 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "nvmf_create_subsystem", 00:13:58.601 "params": { 00:13:58.601 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:13:58.601 "allow_any_host": false, 00:13:58.601 "serial_number": "SPDK00000000000001", 00:13:58.601 "model_number": "SPDK bdev Controller", 00:13:58.601 "max_namespaces": 10, 00:13:58.601 "min_cntlid": 1, 00:13:58.601 "max_cntlid": 65519, 00:13:58.601 "ana_reporting": false 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "nvmf_subsystem_add_host", 00:13:58.601 "params": { 00:13:58.601 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.601 "host": "nqn.2016-06.io.spdk:host1", 00:13:58.601 "psk": "/tmp/tmp.Mvcj9h1dED" 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "nvmf_subsystem_add_ns", 00:13:58.601 "params": { 00:13:58.601 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.601 "namespace": { 00:13:58.601 "nsid": 1, 00:13:58.601 "bdev_name": "malloc0", 00:13:58.601 "nguid": "EBC5936DC4AA4E18901E6533DA881821", 00:13:58.601 "uuid": "ebc5936d-c4aa-4e18-901e-6533da881821", 00:13:58.601 "no_auto_visible": false 00:13:58.601 } 00:13:58.601 } 00:13:58.601 }, 00:13:58.601 { 00:13:58.601 "method": "nvmf_subsystem_add_listener", 00:13:58.601 "params": { 00:13:58.601 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.601 "listen_address": { 00:13:58.601 "trtype": "TCP", 00:13:58.601 "adrfam": "IPv4", 00:13:58.601 "traddr": "10.0.0.3", 00:13:58.601 "trsvcid": "4420" 00:13:58.601 }, 00:13:58.601 "secure_channel": true 00:13:58.601 } 00:13:58.601 } 00:13:58.601 ] 00:13:58.601 } 00:13:58.601 ] 00:13:58.601 }' 00:13:58.601 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:58.861 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:13:58.861 "subsystems": [ 00:13:58.861 { 00:13:58.861 "subsystem": "keyring", 00:13:58.861 "config": [] 00:13:58.861 }, 00:13:58.861 { 00:13:58.861 "subsystem": "iobuf", 00:13:58.861 "config": [ 00:13:58.861 { 00:13:58.861 "method": "iobuf_set_options", 00:13:58.861 "params": { 00:13:58.861 "small_pool_count": 8192, 00:13:58.861 "large_pool_count": 1024, 00:13:58.861 "small_bufsize": 8192, 00:13:58.861 "large_bufsize": 135168 00:13:58.861 } 00:13:58.861 } 00:13:58.861 ] 00:13:58.861 }, 00:13:58.861 { 00:13:58.861 "subsystem": "sock", 00:13:58.861 "config": [ 00:13:58.861 { 00:13:58.861 "method": "sock_set_default_impl", 00:13:58.861 "params": { 00:13:58.861 "impl_name": "uring" 00:13:58.861 } 00:13:58.861 }, 00:13:58.861 { 00:13:58.861 "method": "sock_impl_set_options", 00:13:58.861 "params": { 00:13:58.861 "impl_name": "ssl", 00:13:58.861 "recv_buf_size": 4096, 00:13:58.861 "send_buf_size": 4096, 00:13:58.861 "enable_recv_pipe": true, 00:13:58.861 "enable_quickack": false, 00:13:58.861 "enable_placement_id": 0, 00:13:58.861 "enable_zerocopy_send_server": true, 00:13:58.861 "enable_zerocopy_send_client": false, 00:13:58.861 "zerocopy_threshold": 0, 00:13:58.861 "tls_version": 0, 00:13:58.861 "enable_ktls": false 00:13:58.861 } 00:13:58.861 }, 00:13:58.861 { 00:13:58.861 "method": "sock_impl_set_options", 00:13:58.861 "params": { 00:13:58.861 "impl_name": "posix", 00:13:58.861 "recv_buf_size": 2097152, 00:13:58.861 "send_buf_size": 2097152, 00:13:58.861 "enable_recv_pipe": true, 00:13:58.861 "enable_quickack": false, 00:13:58.861 "enable_placement_id": 0, 00:13:58.861 "enable_zerocopy_send_server": true, 00:13:58.861 "enable_zerocopy_send_client": false, 00:13:58.861 "zerocopy_threshold": 0, 00:13:58.861 "tls_version": 0, 00:13:58.861 "enable_ktls": false 00:13:58.861 } 
00:13:58.861 }, 00:13:58.861 { 00:13:58.861 "method": "sock_impl_set_options", 00:13:58.861 "params": { 00:13:58.861 "impl_name": "uring", 00:13:58.861 "recv_buf_size": 2097152, 00:13:58.861 "send_buf_size": 2097152, 00:13:58.861 "enable_recv_pipe": true, 00:13:58.861 "enable_quickack": false, 00:13:58.861 "enable_placement_id": 0, 00:13:58.861 "enable_zerocopy_send_server": false, 00:13:58.861 "enable_zerocopy_send_client": false, 00:13:58.861 "zerocopy_threshold": 0, 00:13:58.861 "tls_version": 0, 00:13:58.861 "enable_ktls": false 00:13:58.861 } 00:13:58.861 } 00:13:58.861 ] 00:13:58.861 }, 00:13:58.861 { 00:13:58.861 "subsystem": "vmd", 00:13:58.861 "config": [] 00:13:58.861 }, 00:13:58.861 { 00:13:58.861 "subsystem": "accel", 00:13:58.861 "config": [ 00:13:58.861 { 00:13:58.861 "method": "accel_set_options", 00:13:58.861 "params": { 00:13:58.861 "small_cache_size": 128, 00:13:58.861 "large_cache_size": 16, 00:13:58.861 "task_count": 2048, 00:13:58.861 "sequence_count": 2048, 00:13:58.861 "buf_count": 2048 00:13:58.861 } 00:13:58.861 } 00:13:58.861 ] 00:13:58.861 }, 00:13:58.861 { 00:13:58.861 "subsystem": "bdev", 00:13:58.861 "config": [ 00:13:58.861 { 00:13:58.861 "method": "bdev_set_options", 00:13:58.861 "params": { 00:13:58.861 "bdev_io_pool_size": 65535, 00:13:58.861 "bdev_io_cache_size": 256, 00:13:58.861 "bdev_auto_examine": true, 00:13:58.861 "iobuf_small_cache_size": 128, 00:13:58.861 "iobuf_large_cache_size": 16 00:13:58.861 } 00:13:58.861 }, 00:13:58.861 { 00:13:58.861 "method": "bdev_raid_set_options", 00:13:58.861 "params": { 00:13:58.861 "process_window_size_kb": 1024, 00:13:58.861 "process_max_bandwidth_mb_sec": 0 00:13:58.861 } 00:13:58.861 }, 00:13:58.861 { 00:13:58.861 "method": "bdev_iscsi_set_options", 00:13:58.861 "params": { 00:13:58.861 "timeout_sec": 30 00:13:58.862 } 00:13:58.862 }, 00:13:58.862 { 00:13:58.862 "method": "bdev_nvme_set_options", 00:13:58.862 "params": { 00:13:58.862 "action_on_timeout": "none", 00:13:58.862 "timeout_us": 0, 00:13:58.862 "timeout_admin_us": 0, 00:13:58.862 "keep_alive_timeout_ms": 10000, 00:13:58.862 "arbitration_burst": 0, 00:13:58.862 "low_priority_weight": 0, 00:13:58.862 "medium_priority_weight": 0, 00:13:58.862 "high_priority_weight": 0, 00:13:58.862 "nvme_adminq_poll_period_us": 10000, 00:13:58.862 "nvme_ioq_poll_period_us": 0, 00:13:58.862 "io_queue_requests": 512, 00:13:58.862 "delay_cmd_submit": true, 00:13:58.862 "transport_retry_count": 4, 00:13:58.862 "bdev_retry_count": 3, 00:13:58.862 "transport_ack_timeout": 0, 00:13:58.862 "ctrlr_loss_timeout_sec": 0, 00:13:58.862 "reconnect_delay_sec": 0, 00:13:58.862 "fast_io_fail_timeout_sec": 0, 00:13:58.862 "disable_auto_failback": false, 00:13:58.862 "generate_uuids": false, 00:13:58.862 "transport_tos": 0, 00:13:58.862 "nvme_error_stat": false, 00:13:58.862 "rdma_srq_size": 0, 00:13:58.862 "io_path_stat": false, 00:13:58.862 "allow_accel_sequence": false, 00:13:58.862 "rdma_max_cq_size": 0, 00:13:58.862 "rdma_cm_event_timeout_ms": 0, 00:13:58.862 "dhchap_digests": [ 00:13:58.862 "sha256", 00:13:58.862 "sha384", 00:13:58.862 "sha512" 00:13:58.862 ], 00:13:58.862 "dhchap_dhgroups": [ 00:13:58.862 "null", 00:13:58.862 "ffdhe2048", 00:13:58.862 "ffdhe3072", 00:13:58.862 "ffdhe4096", 00:13:58.862 "ffdhe6144", 00:13:58.862 "ffdhe8192" 00:13:58.862 ] 00:13:58.862 } 00:13:58.862 }, 00:13:58.862 { 00:13:58.862 "method": "bdev_nvme_attach_controller", 00:13:58.862 "params": { 00:13:58.862 "name": "TLSTEST", 00:13:58.862 "trtype": "TCP", 00:13:58.862 "adrfam": "IPv4", 00:13:58.862 
"traddr": "10.0.0.3", 00:13:58.862 "trsvcid": "4420", 00:13:58.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.862 "prchk_reftag": false, 00:13:58.862 "prchk_guard": false, 00:13:58.862 "ctrlr_loss_timeout_sec": 0, 00:13:58.862 "reconnect_delay_sec": 0, 00:13:58.862 "fast_io_fail_timeout_sec": 0, 00:13:58.862 "psk": "/tmp/tmp.Mvcj9h1dED", 00:13:58.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:58.862 "hdgst": false, 00:13:58.862 "ddgst": false 00:13:58.862 } 00:13:58.862 }, 00:13:58.862 { 00:13:58.862 "method": "bdev_nvme_set_hotplug", 00:13:58.862 "params": { 00:13:58.862 "period_us": 100000, 00:13:58.862 "enable": false 00:13:58.862 } 00:13:58.862 }, 00:13:58.862 { 00:13:58.862 "method": "bdev_wait_for_examine" 00:13:58.862 } 00:13:58.862 ] 00:13:58.862 }, 00:13:58.862 { 00:13:58.862 "subsystem": "nbd", 00:13:58.862 "config": [] 00:13:58.862 } 00:13:58.862 ] 00:13:58.862 }' 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 82045 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 82045 ']' 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 82045 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82045 00:13:58.862 killing process with pid 82045 00:13:58.862 Received shutdown signal, test time was about 10.000000 seconds 00:13:58.862 00:13:58.862 Latency(us) 00:13:58.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.862 =================================================================================================================== 00:13:58.862 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82045' 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 82045 00:13:58.862 [2024-08-11 20:55:09.467745] app.c:1025:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:58.862 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 82045 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 81996 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 81996 ']' 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 81996 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81996 00:13:59.122 killing process with pid 81996 00:13:59.122 20:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81996' 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 81996 00:13:59.122 [2024-08-11 20:55:09.686330] app.c:1025:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 81996 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:13:59.122 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:13:59.122 "subsystems": [ 00:13:59.122 { 00:13:59.122 "subsystem": "keyring", 00:13:59.122 "config": [] 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "subsystem": "iobuf", 00:13:59.122 "config": [ 00:13:59.122 { 00:13:59.122 "method": "iobuf_set_options", 00:13:59.122 "params": { 00:13:59.122 "small_pool_count": 8192, 00:13:59.122 "large_pool_count": 1024, 00:13:59.122 "small_bufsize": 8192, 00:13:59.122 "large_bufsize": 135168 00:13:59.122 } 00:13:59.122 } 00:13:59.122 ] 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "subsystem": "sock", 00:13:59.122 "config": [ 00:13:59.122 { 00:13:59.122 "method": "sock_set_default_impl", 00:13:59.122 "params": { 00:13:59.122 "impl_name": "uring" 00:13:59.122 } 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "method": "sock_impl_set_options", 00:13:59.122 "params": { 00:13:59.122 "impl_name": "ssl", 00:13:59.122 "recv_buf_size": 4096, 00:13:59.122 "send_buf_size": 4096, 00:13:59.122 "enable_recv_pipe": true, 00:13:59.122 "enable_quickack": false, 00:13:59.122 "enable_placement_id": 0, 00:13:59.122 "enable_zerocopy_send_server": true, 00:13:59.122 "enable_zerocopy_send_client": false, 00:13:59.122 "zerocopy_threshold": 0, 00:13:59.122 "tls_version": 0, 00:13:59.122 "enable_ktls": false 00:13:59.122 } 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "method": "sock_impl_set_options", 00:13:59.122 "params": { 00:13:59.122 "impl_name": "posix", 00:13:59.122 "recv_buf_size": 2097152, 00:13:59.122 "send_buf_size": 2097152, 00:13:59.122 "enable_recv_pipe": true, 00:13:59.122 "enable_quickack": false, 00:13:59.122 "enable_placement_id": 0, 00:13:59.122 "enable_zerocopy_send_server": true, 00:13:59.122 "enable_zerocopy_send_client": false, 00:13:59.122 "zerocopy_threshold": 0, 00:13:59.122 "tls_version": 0, 00:13:59.122 "enable_ktls": false 00:13:59.122 } 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "method": "sock_impl_set_options", 00:13:59.122 "params": { 00:13:59.122 "impl_name": "uring", 00:13:59.122 "recv_buf_size": 2097152, 00:13:59.122 "send_buf_size": 2097152, 00:13:59.122 "enable_recv_pipe": true, 00:13:59.122 "enable_quickack": false, 00:13:59.122 "enable_placement_id": 0, 00:13:59.122 "enable_zerocopy_send_server": false, 00:13:59.122 "enable_zerocopy_send_client": false, 00:13:59.122 "zerocopy_threshold": 0, 00:13:59.122 "tls_version": 0, 00:13:59.122 "enable_ktls": false 00:13:59.122 } 00:13:59.122 } 00:13:59.122 ] 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "subsystem": "vmd", 00:13:59.122 "config": [] 00:13:59.122 }, 
00:13:59.122 { 00:13:59.122 "subsystem": "accel", 00:13:59.122 "config": [ 00:13:59.122 { 00:13:59.122 "method": "accel_set_options", 00:13:59.122 "params": { 00:13:59.122 "small_cache_size": 128, 00:13:59.122 "large_cache_size": 16, 00:13:59.122 "task_count": 2048, 00:13:59.122 "sequence_count": 2048, 00:13:59.122 "buf_count": 2048 00:13:59.122 } 00:13:59.122 } 00:13:59.122 ] 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "subsystem": "bdev", 00:13:59.122 "config": [ 00:13:59.122 { 00:13:59.122 "method": "bdev_set_options", 00:13:59.122 "params": { 00:13:59.122 "bdev_io_pool_size": 65535, 00:13:59.122 "bdev_io_cache_size": 256, 00:13:59.122 "bdev_auto_examine": true, 00:13:59.122 "iobuf_small_cache_size": 128, 00:13:59.122 "iobuf_large_cache_size": 16 00:13:59.122 } 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "method": "bdev_raid_set_options", 00:13:59.122 "params": { 00:13:59.122 "process_window_size_kb": 1024, 00:13:59.122 "process_max_bandwidth_mb_sec": 0 00:13:59.122 } 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "method": "bdev_iscsi_set_options", 00:13:59.122 "params": { 00:13:59.122 "timeout_sec": 30 00:13:59.122 } 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "method": "bdev_nvme_set_options", 00:13:59.122 "params": { 00:13:59.122 "action_on_timeout": "none", 00:13:59.122 "timeout_us": 0, 00:13:59.122 "timeout_admin_us": 0, 00:13:59.122 "keep_alive_timeout_ms": 10000, 00:13:59.122 "arbitration_burst": 0, 00:13:59.122 "low_priority_weight": 0, 00:13:59.122 "medium_priority_weight": 0, 00:13:59.122 "high_priority_weight": 0, 00:13:59.122 "nvme_adminq_poll_period_us": 10000, 00:13:59.122 "nvme_ioq_poll_period_us": 0, 00:13:59.122 "io_queue_requests": 0, 00:13:59.122 "delay_cmd_submit": true, 00:13:59.122 "transport_retry_count": 4, 00:13:59.122 "bdev_retry_count": 3, 00:13:59.122 "transport_ack_timeout": 0, 00:13:59.122 "ctrlr_loss_timeout_sec": 0, 00:13:59.122 "reconnect_delay_sec": 0, 00:13:59.122 "fast_io_fail_timeout_sec": 0, 00:13:59.122 "disable_auto_failback": false, 00:13:59.122 "generate_uuids": false, 00:13:59.122 "transport_tos": 0, 00:13:59.122 "nvme_error_stat": false, 00:13:59.122 "rdma_srq_size": 0, 00:13:59.122 "io_path_stat": false, 00:13:59.122 "allow_accel_sequence": false, 00:13:59.122 "rdma_max_cq_size": 0, 00:13:59.122 "rdma_cm_event_timeout_ms": 0, 00:13:59.122 "dhchap_digests": [ 00:13:59.122 "sha256", 00:13:59.122 "sha384", 00:13:59.122 "sha512" 00:13:59.122 ], 00:13:59.122 "dhchap_dhgroups": [ 00:13:59.122 "null", 00:13:59.122 "ffdhe2048", 00:13:59.122 "ffdhe3072", 00:13:59.122 "ffdhe4096", 00:13:59.122 "ffdhe6144", 00:13:59.122 "ffdhe8192" 00:13:59.122 ] 00:13:59.122 } 00:13:59.122 }, 00:13:59.122 { 00:13:59.122 "method": "bdev_nvme_set_hotplug", 00:13:59.122 "params": { 00:13:59.122 "period_us": 100000, 00:13:59.122 "enable": false 00:13:59.122 } 00:13:59.122 }, 00:13:59.123 { 00:13:59.123 "method": "bdev_malloc_create", 00:13:59.123 "params": { 00:13:59.123 "name": "malloc0", 00:13:59.123 "num_blocks": 8192, 00:13:59.123 "block_size": 4096, 00:13:59.123 "physical_block_size": 4096, 00:13:59.123 "uuid": "ebc5936d-c4aa-4e18-901e-6533da881821", 00:13:59.123 "optimal_io_boundary": 0, 00:13:59.123 "md_size": 0, 00:13:59.123 "dif_type": 0, 00:13:59.123 "dif_is_head_of_md": false, 00:13:59.123 "dif_pi_format": 0 00:13:59.123 } 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 "method": "bdev_wait_for_examine" 00:13:59.123 } 00:13:59.123 ] 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 "subsystem": "nbd", 00:13:59.123 "config": [] 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 
"subsystem": "scheduler", 00:13:59.123 "config": [ 00:13:59.123 { 00:13:59.123 "method": "framework_set_scheduler", 00:13:59.123 "params": { 00:13:59.123 "name": "static" 00:13:59.123 } 00:13:59.123 } 00:13:59.123 ] 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 "subsystem": "nvmf", 00:13:59.123 "config": [ 00:13:59.123 { 00:13:59.123 "method": "nvmf_set_config", 00:13:59.123 "params": { 00:13:59.123 "discovery_filter": "match_any", 00:13:59.123 "admin_cmd_passthru": { 00:13:59.123 "identify_ctrlr": false 00:13:59.123 } 00:13:59.123 } 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 "method": "nvmf_set_max_subsystems", 00:13:59.123 "params": { 00:13:59.123 "max_subsystems": 1024 00:13:59.123 } 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 "method": "nvmf_set_crdt", 00:13:59.123 "params": { 00:13:59.123 "crdt1": 0, 00:13:59.123 "crdt2": 0, 00:13:59.123 "crdt3": 0 00:13:59.123 } 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 "method": "nvmf_create_transport", 00:13:59.123 "params": { 00:13:59.123 "trtype": "TCP", 00:13:59.123 "max_queue_depth": 128, 00:13:59.123 "max_io_qpairs_per_ctrlr": 127, 00:13:59.123 "in_capsule_data_size": 4096, 00:13:59.123 "max_io_size": 131072, 00:13:59.123 "io_unit_size": 131072, 00:13:59.123 "max_aq_depth": 128, 00:13:59.123 "num_shared_buffers": 511, 00:13:59.123 "buf_cache_size": 4294967295, 00:13:59.123 "dif_insert_or_strip": false, 00:13:59.123 "zcopy": false, 00:13:59.123 "c2h_success": false, 00:13:59.123 "sock_priority": 0, 00:13:59.123 "abort_timeout_sec": 1, 00:13:59.123 "ack_timeout": 0, 00:13:59.123 "data_wr_pool_size": 0 00:13:59.123 } 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 "method": "nvmf_create_subsystem", 00:13:59.123 "params": { 00:13:59.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.123 "allow_any_host": false, 00:13:59.123 "serial_number": "SPDK00000000000001", 00:13:59.123 "model_number": "SPDK bdev Controller", 00:13:59.123 "max_namespaces": 10, 00:13:59.123 "min_cntlid": 1, 00:13:59.123 "max_cntlid": 65519, 00:13:59.123 "ana_reporting": false 00:13:59.123 } 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 "method": "nvmf_subsystem_add_host", 00:13:59.123 "params": { 00:13:59.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.123 "host": "nqn.2016-06.io.spdk:host1", 00:13:59.123 "psk": "/tmp/tmp.Mvcj9h1dED" 00:13:59.123 } 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 "method": "nvmf_subsystem_add_ns", 00:13:59.123 "params": { 00:13:59.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.123 "namespace": { 00:13:59.123 "nsid": 1, 00:13:59.123 "bdev_name": "malloc0", 00:13:59.123 "nguid": "EBC5936DC4AA4E18901E6533DA881821", 00:13:59.123 "uuid": "ebc5936d-c4aa-4e18-901e-6533da881821", 00:13:59.123 "no_auto_visible": false 00:13:59.123 } 00:13:59.123 } 00:13:59.123 }, 00:13:59.123 { 00:13:59.123 "method": "nvmf_subsystem_add_listener", 00:13:59.123 "params": { 00:13:59.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.123 "listen_address": { 00:13:59.123 "trtype": "TCP", 00:13:59.123 "adrfam": "IPv4", 00:13:59.123 "traddr": "10.0.0.3", 00:13:59.123 "trsvcid": "4420" 00:13:59.123 }, 00:13:59.123 "secure_channel": true 00:13:59.123 } 00:13:59.123 } 00:13:59.123 ] 00:13:59.123 } 00:13:59.123 ] 00:13:59.123 }' 00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@501 -- # nvmfpid=82094 00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # waitforlisten 82094 00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 82094 ']' 00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.123 20:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.382 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:13:59.382 [2024-08-11 20:55:09.942953] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:13:59.382 [2024-08-11 20:55:09.943266] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.382 [2024-08-11 20:55:10.080610] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.382 [2024-08-11 20:55:10.142842] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.382 [2024-08-11 20:55:10.143168] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.382 [2024-08-11 20:55:10.143357] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.382 [2024-08-11 20:55:10.143477] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.382 [2024-08-11 20:55:10.143509] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:59.382 [2024-08-11 20:55:10.143703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.641 [2024-08-11 20:55:10.307912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:59.641 [2024-08-11 20:55:10.370275] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.641 [2024-08-11 20:55:10.386217] tcp.c:3766:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:59.641 [2024-08-11 20:55:10.402236] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:59.641 [2024-08-11 20:55:10.410763] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=82126 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 82126 /var/tmp/bdevperf.sock 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 82126 ']' 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
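With the target now listening on 10.0.0.3:4420 (TLS still flagged experimental), the initiator side comes up next: a bdevperf process started with -z so it only opens its RPC socket and waits, fed its own JSON configuration through another /dev/fd descriptor, and then driven into the 10-second verify workload by a perform_tests RPC. The two commands, as issued further down in this log (paths shortened to be relative to the SPDK repo):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests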
00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:00.209 20:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:00.209 "subsystems": [ 00:14:00.209 { 00:14:00.209 "subsystem": "keyring", 00:14:00.209 "config": [] 00:14:00.209 }, 00:14:00.209 { 00:14:00.209 "subsystem": "iobuf", 00:14:00.209 "config": [ 00:14:00.209 { 00:14:00.209 "method": "iobuf_set_options", 00:14:00.209 "params": { 00:14:00.209 "small_pool_count": 8192, 00:14:00.209 "large_pool_count": 1024, 00:14:00.209 "small_bufsize": 8192, 00:14:00.209 "large_bufsize": 135168 00:14:00.209 } 00:14:00.209 } 00:14:00.209 ] 00:14:00.209 }, 00:14:00.209 { 00:14:00.209 "subsystem": "sock", 00:14:00.209 "config": [ 00:14:00.209 { 00:14:00.209 "method": "sock_set_default_impl", 00:14:00.209 "params": { 00:14:00.209 "impl_name": "uring" 00:14:00.209 } 00:14:00.209 }, 00:14:00.209 { 00:14:00.209 "method": "sock_impl_set_options", 00:14:00.209 "params": { 00:14:00.209 "impl_name": "ssl", 00:14:00.209 "recv_buf_size": 4096, 00:14:00.209 "send_buf_size": 4096, 00:14:00.209 "enable_recv_pipe": true, 00:14:00.209 "enable_quickack": false, 00:14:00.209 "enable_placement_id": 0, 00:14:00.209 "enable_zerocopy_send_server": true, 00:14:00.209 "enable_zerocopy_send_client": false, 00:14:00.209 "zerocopy_threshold": 0, 00:14:00.209 "tls_version": 0, 00:14:00.209 "enable_ktls": false 00:14:00.209 } 00:14:00.209 }, 00:14:00.209 { 00:14:00.209 "method": "sock_impl_set_options", 00:14:00.209 "params": { 00:14:00.209 "impl_name": "posix", 00:14:00.209 "recv_buf_size": 2097152, 00:14:00.209 "send_buf_size": 2097152, 00:14:00.209 "enable_recv_pipe": true, 00:14:00.209 "enable_quickack": false, 00:14:00.209 "enable_placement_id": 0, 00:14:00.209 "enable_zerocopy_send_server": true, 00:14:00.209 "enable_zerocopy_send_client": false, 00:14:00.209 "zerocopy_threshold": 0, 00:14:00.209 "tls_version": 0, 00:14:00.209 "enable_ktls": false 00:14:00.209 } 00:14:00.209 }, 00:14:00.209 { 00:14:00.209 "method": "sock_impl_set_options", 00:14:00.209 "params": { 00:14:00.209 "impl_name": "uring", 00:14:00.209 "recv_buf_size": 2097152, 00:14:00.209 "send_buf_size": 2097152, 00:14:00.209 "enable_recv_pipe": true, 00:14:00.209 "enable_quickack": false, 00:14:00.209 "enable_placement_id": 0, 00:14:00.209 "enable_zerocopy_send_server": false, 00:14:00.209 "enable_zerocopy_send_client": false, 00:14:00.209 "zerocopy_threshold": 0, 00:14:00.209 "tls_version": 0, 00:14:00.209 "enable_ktls": false 00:14:00.209 } 00:14:00.209 } 00:14:00.209 ] 00:14:00.209 }, 00:14:00.209 { 00:14:00.209 "subsystem": "vmd", 00:14:00.209 "config": [] 00:14:00.209 }, 00:14:00.209 { 00:14:00.209 "subsystem": "accel", 00:14:00.209 "config": [ 00:14:00.209 { 00:14:00.209 "method": "accel_set_options", 00:14:00.209 "params": { 00:14:00.209 "small_cache_size": 128, 00:14:00.209 "large_cache_size": 16, 00:14:00.210 "task_count": 2048, 00:14:00.210 "sequence_count": 2048, 00:14:00.210 "buf_count": 2048 00:14:00.210 } 00:14:00.210 } 00:14:00.210 ] 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "subsystem": "bdev", 00:14:00.210 "config": [ 00:14:00.210 { 00:14:00.210 "method": "bdev_set_options", 
00:14:00.210 "params": { 00:14:00.210 "bdev_io_pool_size": 65535, 00:14:00.210 "bdev_io_cache_size": 256, 00:14:00.210 "bdev_auto_examine": true, 00:14:00.210 "iobuf_small_cache_size": 128, 00:14:00.210 "iobuf_large_cache_size": 16 00:14:00.210 } 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "method": "bdev_raid_set_options", 00:14:00.210 "params": { 00:14:00.210 "process_window_size_kb": 1024, 00:14:00.210 "process_max_bandwidth_mb_sec": 0 00:14:00.210 } 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "method": "bdev_iscsi_set_options", 00:14:00.210 "params": { 00:14:00.210 "timeout_sec": 30 00:14:00.210 } 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "method": "bdev_nvme_set_options", 00:14:00.210 "params": { 00:14:00.210 "action_on_timeout": "none", 00:14:00.210 "timeout_us": 0, 00:14:00.210 "timeout_admin_us": 0, 00:14:00.210 "keep_alive_timeout_ms": 10000, 00:14:00.210 "arbitration_burst": 0, 00:14:00.210 "low_priority_weight": 0, 00:14:00.210 "medium_priority_weight": 0, 00:14:00.210 "high_priority_weight": 0, 00:14:00.210 "nvme_adminq_poll_period_us": 10000, 00:14:00.210 "nvme_ioq_poll_period_us": 0, 00:14:00.210 "io_queue_requests": 512, 00:14:00.210 "delay_cmd_submit": true, 00:14:00.210 "transport_retry_count": 4, 00:14:00.210 "bdev_retry_count": 3, 00:14:00.210 "transport_ack_timeout": 0, 00:14:00.210 "ctrlr_loss_timeout_sec": 0, 00:14:00.210 "reconnect_delay_sec": 0, 00:14:00.210 "fast_io_fail_timeout_sec": 0, 00:14:00.210 "disable_auto_failback": false, 00:14:00.210 "generate_uuids": false, 00:14:00.210 "transport_tos": 0, 00:14:00.210 "nvme_error_stat": false, 00:14:00.210 "rdma_srq_size": 0, 00:14:00.210 "io_path_stat": false, 00:14:00.210 "allow_accel_sequence": false, 00:14:00.210 "rdma_max_cq_size": 0, 00:14:00.210 "rdma_cm_event_timeout_ms": 0, 00:14:00.210 "dhchap_digests": [ 00:14:00.210 "sha256", 00:14:00.210 "sha384", 00:14:00.210 "sha512" 00:14:00.210 ], 00:14:00.210 "dhchap_dhgroups": [ 00:14:00.210 "null", 00:14:00.210 "ffdhe2048", 00:14:00.210 "ffdhe3072", 00:14:00.210 "ffdhe4096", 00:14:00.210 "ffdhe6144", 00:14:00.210 "ffdhe8192" 00:14:00.210 ] 00:14:00.210 } 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "method": "bdev_nvme_attach_controller", 00:14:00.210 "params": { 00:14:00.210 "name": "TLSTEST", 00:14:00.210 "trtype": "TCP", 00:14:00.210 "adrfam": "IPv4", 00:14:00.210 "traddr": "10.0.0.3", 00:14:00.210 "trsvcid": "4420", 00:14:00.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.210 "prchk_reftag": false, 00:14:00.210 "prchk_guard": false, 00:14:00.210 "ctrlr_loss_timeout_sec": 0, 00:14:00.210 "reconnect_delay_sec": 0, 00:14:00.210 "fast_io_fail_timeout_sec": 0, 00:14:00.210 "psk": "/tmp/tmp.Mvcj9h1dED", 00:14:00.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:00.210 "hdgst": false, 00:14:00.210 "ddgst": false 00:14:00.210 } 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "method": "bdev_nvme_set_hotplug", 00:14:00.210 "params": { 00:14:00.210 "period_us": 100000, 00:14:00.210 "enable": false 00:14:00.210 } 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "method": "bdev_wait_for_examine" 00:14:00.210 } 00:14:00.210 ] 00:14:00.210 }, 00:14:00.210 { 00:14:00.210 "subsystem": "nbd", 00:14:00.210 "config": [] 00:14:00.210 } 00:14:00.210 ] 00:14:00.210 }' 00:14:00.469 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:00.469 [2024-08-11 20:55:10.991340] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:14:00.469 [2024-08-11 20:55:10.991651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82126 ] 00:14:00.469 [2024-08-11 20:55:11.133296] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.469 [2024-08-11 20:55:11.207469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.727 [2024-08-11 20:55:11.344982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:00.727 [2024-08-11 20:55:11.379022] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:00.727 [2024-08-11 20:55:11.379145] nvme_tcp.c:2594:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:01.294 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:01.294 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:01.294 20:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:01.294 Running I/O for 10 seconds... 00:14:13.506 00:14:13.506 Latency(us) 00:14:13.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.506 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:13.506 Verification LBA range: start 0x0 length 0x2000 00:14:13.506 TLSTESTn1 : 10.01 4349.62 16.99 0.00 0.00 29376.17 6017.40 29669.93 00:14:13.506 =================================================================================================================== 00:14:13.506 Total : 4349.62 16.99 0.00 0.00 29376.17 6017.40 29669.93 00:14:13.506 0 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 82126 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 82126 ']' 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 82126 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82126 00:14:13.506 killing process with pid 82126 00:14:13.506 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.506 00:14:13.506 Latency(us) 00:14:13.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.506 =================================================================================================================== 00:14:13.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 82126' 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 82126 00:14:13.506 [2024-08-11 20:55:22.111799] app.c:1025:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 82126 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 82094 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 82094 ']' 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 82094 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82094 00:14:13.506 killing process with pid 82094 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82094' 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 82094 00:14:13.506 [2024-08-11 20:55:22.327733] app.c:1025:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 82094 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@501 -- # nvmfpid=82265 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # waitforlisten 82265 00:14:13.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 82265 ']' 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:13.506 20:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.506 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:13.506 [2024-08-11 20:55:22.595514] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:14:13.506 [2024-08-11 20:55:22.595980] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.506 [2024-08-11 20:55:22.734827] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.506 [2024-08-11 20:55:22.800520] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.506 [2024-08-11 20:55:22.800608] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.506 [2024-08-11 20:55:22.800625] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.506 [2024-08-11 20:55:22.800636] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.506 [2024-08-11 20:55:22.800645] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.506 [2024-08-11 20:55:22.800679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.506 [2024-08-11 20:55:22.855009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Mvcj9h1dED 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Mvcj9h1dED 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:13.507 [2024-08-11 20:55:23.734947] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:13.507 20:55:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:13.507 [2024-08-11 20:55:24.195039] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:13.507 [2024-08-11 20:55:24.195252] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.3 port 4420 *** 00:14:13.507 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:13.765 malloc0 00:14:13.765 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:14.023 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Mvcj9h1dED 00:14:14.282 [2024-08-11 20:55:24.905255] tcp.c:3766:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:14.282 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=82314 00:14:14.282 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:14.282 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.282 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 82314 /var/tmp/bdevperf.sock 00:14:14.282 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 82314 ']' 00:14:14.282 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.282 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.282 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.282 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.282 20:55:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.282 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:14.282 [2024-08-11 20:55:24.981705] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:14:14.282 [2024-08-11 20:55:24.981814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82314 ] 00:14:14.541 [2024-08-11 20:55:25.124525] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.541 [2024-08-11 20:55:25.201815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.541 [2024-08-11 20:55:25.259212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.477 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:15.477 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:15.477 20:55:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Mvcj9h1dED 00:14:15.477 20:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:15.736 [2024-08-11 20:55:26.325702] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.736 nvme0n1 00:14:15.736 20:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:15.995 Running I/O for 1 seconds... 00:14:16.930 00:14:16.930 Latency(us) 00:14:16.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.930 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:16.931 Verification LBA range: start 0x0 length 0x2000 00:14:16.931 nvme0n1 : 1.02 4685.17 18.30 0.00 0.00 26981.73 3872.58 17992.61 00:14:16.931 =================================================================================================================== 00:14:16.931 Total : 4685.17 18.30 0.00 0.00 26981.73 3872.58 17992.61 00:14:16.931 0 00:14:16.931 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 82314 00:14:16.931 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 82314 ']' 00:14:16.931 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 82314 00:14:16.931 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:16.931 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:16.931 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82314 00:14:16.931 killing process with pid 82314 00:14:16.931 Received shutdown signal, test time was about 1.000000 seconds 00:14:16.931 00:14:16.931 Latency(us) 00:14:16.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.931 =================================================================================================================== 00:14:16.931 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.931 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:16.931 
20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:16.931 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82314' 00:14:16.931 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 82314 00:14:16.931 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 82314 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 82265 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 82265 ']' 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 82265 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82265 00:14:17.190 killing process with pid 82265 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82265' 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 82265 00:14:17.190 [2024-08-11 20:55:27.840386] app.c:1025:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:17.190 20:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 82265 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@501 -- # nvmfpid=82360 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # waitforlisten 82360 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 82360 ']' 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
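The run being set up here repeats the initiator pattern of the previous one: rather than handing the NVMe driver a raw PSK file path (the variant flagged above with the 'PSK path' and 'spdk_nvme_ctrlr_opts.psk' deprecation warnings), the PSK file is first registered as a named key in the keyring and the controller attach then references it by name. The two RPCs against the bdevperf socket, as they appear later in this log:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Mvcj9h1dED
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1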
00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:17.449 20:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.449 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:17.449 [2024-08-11 20:55:28.095127] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:14:17.449 [2024-08-11 20:55:28.095609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.708 [2024-08-11 20:55:28.228286] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.708 [2024-08-11 20:55:28.279831] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.708 [2024-08-11 20:55:28.279888] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.708 [2024-08-11 20:55:28.279898] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.708 [2024-08-11 20:55:28.279904] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.708 [2024-08-11 20:55:28.279910] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.708 [2024-08-11 20:55:28.279933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.708 [2024-08-11 20:55:28.329289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.273 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:18.273 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:18.273 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:14:18.273 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.273 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.531 [2024-08-11 20:55:29.088426] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.531 malloc0 00:14:18.531 [2024-08-11 20:55:29.119811] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:18.531 [2024-08-11 20:55:29.120173] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=82398 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:18.531 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 82398 /var/tmp/bdevperf.sock 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 82398 ']' 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:18.531 20:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.531 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:18.531 [2024-08-11 20:55:29.201031] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:14:18.531 [2024-08-11 20:55:29.201255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82398 ] 00:14:18.789 [2024-08-11 20:55:29.340586] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.789 [2024-08-11 20:55:29.417005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.789 [2024-08-11 20:55:29.476317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.741 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:19.741 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:19.741 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Mvcj9h1dED 00:14:19.741 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:20.028 [2024-08-11 20:55:30.739520] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:20.287 nvme0n1 00:14:20.287 20:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:20.287 Running I/O for 1 seconds... 
00:14:21.221 00:14:21.221 Latency(us) 00:14:21.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.221 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:21.221 Verification LBA range: start 0x0 length 0x2000 00:14:21.222 nvme0n1 : 1.03 4866.64 19.01 0.00 0.00 26053.61 6106.76 16562.73 00:14:21.222 =================================================================================================================== 00:14:21.222 Total : 4866.64 19.01 0.00 0.00 26053.61 6106.76 16562.73 00:14:21.222 0 00:14:21.480 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:14:21.480 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:21.480 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.480 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:21.480 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:14:21.480 "subsystems": [ 00:14:21.480 { 00:14:21.480 "subsystem": "keyring", 00:14:21.480 "config": [ 00:14:21.480 { 00:14:21.480 "method": "keyring_file_add_key", 00:14:21.480 "params": { 00:14:21.480 "name": "key0", 00:14:21.480 "path": "/tmp/tmp.Mvcj9h1dED" 00:14:21.480 } 00:14:21.480 } 00:14:21.480 ] 00:14:21.480 }, 00:14:21.480 { 00:14:21.480 "subsystem": "iobuf", 00:14:21.480 "config": [ 00:14:21.480 { 00:14:21.480 "method": "iobuf_set_options", 00:14:21.480 "params": { 00:14:21.480 "small_pool_count": 8192, 00:14:21.480 "large_pool_count": 1024, 00:14:21.480 "small_bufsize": 8192, 00:14:21.480 "large_bufsize": 135168 00:14:21.480 } 00:14:21.480 } 00:14:21.480 ] 00:14:21.480 }, 00:14:21.480 { 00:14:21.480 "subsystem": "sock", 00:14:21.480 "config": [ 00:14:21.480 { 00:14:21.480 "method": "sock_set_default_impl", 00:14:21.480 "params": { 00:14:21.480 "impl_name": "uring" 00:14:21.480 } 00:14:21.480 }, 00:14:21.480 { 00:14:21.480 "method": "sock_impl_set_options", 00:14:21.480 "params": { 00:14:21.480 "impl_name": "ssl", 00:14:21.480 "recv_buf_size": 4096, 00:14:21.480 "send_buf_size": 4096, 00:14:21.480 "enable_recv_pipe": true, 00:14:21.480 "enable_quickack": false, 00:14:21.480 "enable_placement_id": 0, 00:14:21.480 "enable_zerocopy_send_server": true, 00:14:21.480 "enable_zerocopy_send_client": false, 00:14:21.480 "zerocopy_threshold": 0, 00:14:21.480 "tls_version": 0, 00:14:21.480 "enable_ktls": false 00:14:21.480 } 00:14:21.480 }, 00:14:21.480 { 00:14:21.480 "method": "sock_impl_set_options", 00:14:21.480 "params": { 00:14:21.480 "impl_name": "posix", 00:14:21.480 "recv_buf_size": 2097152, 00:14:21.480 "send_buf_size": 2097152, 00:14:21.480 "enable_recv_pipe": true, 00:14:21.480 "enable_quickack": false, 00:14:21.480 "enable_placement_id": 0, 00:14:21.480 "enable_zerocopy_send_server": true, 00:14:21.480 "enable_zerocopy_send_client": false, 00:14:21.480 "zerocopy_threshold": 0, 00:14:21.480 "tls_version": 0, 00:14:21.480 "enable_ktls": false 00:14:21.480 } 00:14:21.480 }, 00:14:21.480 { 00:14:21.480 "method": "sock_impl_set_options", 00:14:21.480 "params": { 00:14:21.480 "impl_name": "uring", 00:14:21.480 "recv_buf_size": 2097152, 00:14:21.481 "send_buf_size": 2097152, 00:14:21.481 "enable_recv_pipe": true, 00:14:21.481 "enable_quickack": false, 00:14:21.481 "enable_placement_id": 0, 00:14:21.481 "enable_zerocopy_send_server": false, 00:14:21.481 "enable_zerocopy_send_client": false, 00:14:21.481 
"zerocopy_threshold": 0, 00:14:21.481 "tls_version": 0, 00:14:21.481 "enable_ktls": false 00:14:21.481 } 00:14:21.481 } 00:14:21.481 ] 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "subsystem": "vmd", 00:14:21.481 "config": [] 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "subsystem": "accel", 00:14:21.481 "config": [ 00:14:21.481 { 00:14:21.481 "method": "accel_set_options", 00:14:21.481 "params": { 00:14:21.481 "small_cache_size": 128, 00:14:21.481 "large_cache_size": 16, 00:14:21.481 "task_count": 2048, 00:14:21.481 "sequence_count": 2048, 00:14:21.481 "buf_count": 2048 00:14:21.481 } 00:14:21.481 } 00:14:21.481 ] 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "subsystem": "bdev", 00:14:21.481 "config": [ 00:14:21.481 { 00:14:21.481 "method": "bdev_set_options", 00:14:21.481 "params": { 00:14:21.481 "bdev_io_pool_size": 65535, 00:14:21.481 "bdev_io_cache_size": 256, 00:14:21.481 "bdev_auto_examine": true, 00:14:21.481 "iobuf_small_cache_size": 128, 00:14:21.481 "iobuf_large_cache_size": 16 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "bdev_raid_set_options", 00:14:21.481 "params": { 00:14:21.481 "process_window_size_kb": 1024, 00:14:21.481 "process_max_bandwidth_mb_sec": 0 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "bdev_iscsi_set_options", 00:14:21.481 "params": { 00:14:21.481 "timeout_sec": 30 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "bdev_nvme_set_options", 00:14:21.481 "params": { 00:14:21.481 "action_on_timeout": "none", 00:14:21.481 "timeout_us": 0, 00:14:21.481 "timeout_admin_us": 0, 00:14:21.481 "keep_alive_timeout_ms": 10000, 00:14:21.481 "arbitration_burst": 0, 00:14:21.481 "low_priority_weight": 0, 00:14:21.481 "medium_priority_weight": 0, 00:14:21.481 "high_priority_weight": 0, 00:14:21.481 "nvme_adminq_poll_period_us": 10000, 00:14:21.481 "nvme_ioq_poll_period_us": 0, 00:14:21.481 "io_queue_requests": 0, 00:14:21.481 "delay_cmd_submit": true, 00:14:21.481 "transport_retry_count": 4, 00:14:21.481 "bdev_retry_count": 3, 00:14:21.481 "transport_ack_timeout": 0, 00:14:21.481 "ctrlr_loss_timeout_sec": 0, 00:14:21.481 "reconnect_delay_sec": 0, 00:14:21.481 "fast_io_fail_timeout_sec": 0, 00:14:21.481 "disable_auto_failback": false, 00:14:21.481 "generate_uuids": false, 00:14:21.481 "transport_tos": 0, 00:14:21.481 "nvme_error_stat": false, 00:14:21.481 "rdma_srq_size": 0, 00:14:21.481 "io_path_stat": false, 00:14:21.481 "allow_accel_sequence": false, 00:14:21.481 "rdma_max_cq_size": 0, 00:14:21.481 "rdma_cm_event_timeout_ms": 0, 00:14:21.481 "dhchap_digests": [ 00:14:21.481 "sha256", 00:14:21.481 "sha384", 00:14:21.481 "sha512" 00:14:21.481 ], 00:14:21.481 "dhchap_dhgroups": [ 00:14:21.481 "null", 00:14:21.481 "ffdhe2048", 00:14:21.481 "ffdhe3072", 00:14:21.481 "ffdhe4096", 00:14:21.481 "ffdhe6144", 00:14:21.481 "ffdhe8192" 00:14:21.481 ] 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "bdev_nvme_set_hotplug", 00:14:21.481 "params": { 00:14:21.481 "period_us": 100000, 00:14:21.481 "enable": false 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "bdev_malloc_create", 00:14:21.481 "params": { 00:14:21.481 "name": "malloc0", 00:14:21.481 "num_blocks": 8192, 00:14:21.481 "block_size": 4096, 00:14:21.481 "physical_block_size": 4096, 00:14:21.481 "uuid": "ea4efb05-3f25-4df5-b954-104201e14ef0", 00:14:21.481 "optimal_io_boundary": 0, 00:14:21.481 "md_size": 0, 00:14:21.481 "dif_type": 0, 00:14:21.481 "dif_is_head_of_md": false, 00:14:21.481 "dif_pi_format": 0 00:14:21.481 } 
00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "bdev_wait_for_examine" 00:14:21.481 } 00:14:21.481 ] 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "subsystem": "nbd", 00:14:21.481 "config": [] 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "subsystem": "scheduler", 00:14:21.481 "config": [ 00:14:21.481 { 00:14:21.481 "method": "framework_set_scheduler", 00:14:21.481 "params": { 00:14:21.481 "name": "static" 00:14:21.481 } 00:14:21.481 } 00:14:21.481 ] 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "subsystem": "nvmf", 00:14:21.481 "config": [ 00:14:21.481 { 00:14:21.481 "method": "nvmf_set_config", 00:14:21.481 "params": { 00:14:21.481 "discovery_filter": "match_any", 00:14:21.481 "admin_cmd_passthru": { 00:14:21.481 "identify_ctrlr": false 00:14:21.481 } 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "nvmf_set_max_subsystems", 00:14:21.481 "params": { 00:14:21.481 "max_subsystems": 1024 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "nvmf_set_crdt", 00:14:21.481 "params": { 00:14:21.481 "crdt1": 0, 00:14:21.481 "crdt2": 0, 00:14:21.481 "crdt3": 0 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "nvmf_create_transport", 00:14:21.481 "params": { 00:14:21.481 "trtype": "TCP", 00:14:21.481 "max_queue_depth": 128, 00:14:21.481 "max_io_qpairs_per_ctrlr": 127, 00:14:21.481 "in_capsule_data_size": 4096, 00:14:21.481 "max_io_size": 131072, 00:14:21.481 "io_unit_size": 131072, 00:14:21.481 "max_aq_depth": 128, 00:14:21.481 "num_shared_buffers": 511, 00:14:21.481 "buf_cache_size": 4294967295, 00:14:21.481 "dif_insert_or_strip": false, 00:14:21.481 "zcopy": false, 00:14:21.481 "c2h_success": false, 00:14:21.481 "sock_priority": 0, 00:14:21.481 "abort_timeout_sec": 1, 00:14:21.481 "ack_timeout": 0, 00:14:21.481 "data_wr_pool_size": 0 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "nvmf_create_subsystem", 00:14:21.481 "params": { 00:14:21.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.481 "allow_any_host": false, 00:14:21.481 "serial_number": "00000000000000000000", 00:14:21.481 "model_number": "SPDK bdev Controller", 00:14:21.481 "max_namespaces": 32, 00:14:21.481 "min_cntlid": 1, 00:14:21.481 "max_cntlid": 65519, 00:14:21.481 "ana_reporting": false 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "nvmf_subsystem_add_host", 00:14:21.481 "params": { 00:14:21.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.481 "host": "nqn.2016-06.io.spdk:host1", 00:14:21.481 "psk": "key0" 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "nvmf_subsystem_add_ns", 00:14:21.481 "params": { 00:14:21.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.481 "namespace": { 00:14:21.481 "nsid": 1, 00:14:21.481 "bdev_name": "malloc0", 00:14:21.481 "nguid": "EA4EFB053F254DF5B954104201E14EF0", 00:14:21.481 "uuid": "ea4efb05-3f25-4df5-b954-104201e14ef0", 00:14:21.481 "no_auto_visible": false 00:14:21.481 } 00:14:21.481 } 00:14:21.481 }, 00:14:21.481 { 00:14:21.481 "method": "nvmf_subsystem_add_listener", 00:14:21.481 "params": { 00:14:21.481 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.481 "listen_address": { 00:14:21.481 "trtype": "TCP", 00:14:21.481 "adrfam": "IPv4", 00:14:21.481 "traddr": "10.0.0.3", 00:14:21.481 "trsvcid": "4420" 00:14:21.481 }, 00:14:21.481 "secure_channel": false, 00:14:21.481 "sock_impl": "ssl" 00:14:21.481 } 00:14:21.481 } 00:14:21.481 ] 00:14:21.481 } 00:14:21.481 ] 00:14:21.481 }' 00:14:21.481 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:21.740 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:14:21.740 "subsystems": [ 00:14:21.740 { 00:14:21.740 "subsystem": "keyring", 00:14:21.740 "config": [ 00:14:21.740 { 00:14:21.740 "method": "keyring_file_add_key", 00:14:21.740 "params": { 00:14:21.740 "name": "key0", 00:14:21.740 "path": "/tmp/tmp.Mvcj9h1dED" 00:14:21.740 } 00:14:21.740 } 00:14:21.740 ] 00:14:21.740 }, 00:14:21.740 { 00:14:21.740 "subsystem": "iobuf", 00:14:21.740 "config": [ 00:14:21.740 { 00:14:21.740 "method": "iobuf_set_options", 00:14:21.740 "params": { 00:14:21.740 "small_pool_count": 8192, 00:14:21.740 "large_pool_count": 1024, 00:14:21.740 "small_bufsize": 8192, 00:14:21.740 "large_bufsize": 135168 00:14:21.740 } 00:14:21.740 } 00:14:21.740 ] 00:14:21.740 }, 00:14:21.740 { 00:14:21.740 "subsystem": "sock", 00:14:21.740 "config": [ 00:14:21.740 { 00:14:21.740 "method": "sock_set_default_impl", 00:14:21.740 "params": { 00:14:21.740 "impl_name": "uring" 00:14:21.740 } 00:14:21.740 }, 00:14:21.740 { 00:14:21.740 "method": "sock_impl_set_options", 00:14:21.740 "params": { 00:14:21.740 "impl_name": "ssl", 00:14:21.740 "recv_buf_size": 4096, 00:14:21.740 "send_buf_size": 4096, 00:14:21.740 "enable_recv_pipe": true, 00:14:21.740 "enable_quickack": false, 00:14:21.740 "enable_placement_id": 0, 00:14:21.741 "enable_zerocopy_send_server": true, 00:14:21.741 "enable_zerocopy_send_client": false, 00:14:21.741 "zerocopy_threshold": 0, 00:14:21.741 "tls_version": 0, 00:14:21.741 "enable_ktls": false 00:14:21.741 } 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "method": "sock_impl_set_options", 00:14:21.741 "params": { 00:14:21.741 "impl_name": "posix", 00:14:21.741 "recv_buf_size": 2097152, 00:14:21.741 "send_buf_size": 2097152, 00:14:21.741 "enable_recv_pipe": true, 00:14:21.741 "enable_quickack": false, 00:14:21.741 "enable_placement_id": 0, 00:14:21.741 "enable_zerocopy_send_server": true, 00:14:21.741 "enable_zerocopy_send_client": false, 00:14:21.741 "zerocopy_threshold": 0, 00:14:21.741 "tls_version": 0, 00:14:21.741 "enable_ktls": false 00:14:21.741 } 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "method": "sock_impl_set_options", 00:14:21.741 "params": { 00:14:21.741 "impl_name": "uring", 00:14:21.741 "recv_buf_size": 2097152, 00:14:21.741 "send_buf_size": 2097152, 00:14:21.741 "enable_recv_pipe": true, 00:14:21.741 "enable_quickack": false, 00:14:21.741 "enable_placement_id": 0, 00:14:21.741 "enable_zerocopy_send_server": false, 00:14:21.741 "enable_zerocopy_send_client": false, 00:14:21.741 "zerocopy_threshold": 0, 00:14:21.741 "tls_version": 0, 00:14:21.741 "enable_ktls": false 00:14:21.741 } 00:14:21.741 } 00:14:21.741 ] 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "subsystem": "vmd", 00:14:21.741 "config": [] 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "subsystem": "accel", 00:14:21.741 "config": [ 00:14:21.741 { 00:14:21.741 "method": "accel_set_options", 00:14:21.741 "params": { 00:14:21.741 "small_cache_size": 128, 00:14:21.741 "large_cache_size": 16, 00:14:21.741 "task_count": 2048, 00:14:21.741 "sequence_count": 2048, 00:14:21.741 "buf_count": 2048 00:14:21.741 } 00:14:21.741 } 00:14:21.741 ] 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "subsystem": "bdev", 00:14:21.741 "config": [ 00:14:21.741 { 00:14:21.741 "method": "bdev_set_options", 00:14:21.741 "params": { 00:14:21.741 "bdev_io_pool_size": 65535, 00:14:21.741 "bdev_io_cache_size": 256, 00:14:21.741 "bdev_auto_examine": true, 
00:14:21.741 "iobuf_small_cache_size": 128, 00:14:21.741 "iobuf_large_cache_size": 16 00:14:21.741 } 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "method": "bdev_raid_set_options", 00:14:21.741 "params": { 00:14:21.741 "process_window_size_kb": 1024, 00:14:21.741 "process_max_bandwidth_mb_sec": 0 00:14:21.741 } 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "method": "bdev_iscsi_set_options", 00:14:21.741 "params": { 00:14:21.741 "timeout_sec": 30 00:14:21.741 } 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "method": "bdev_nvme_set_options", 00:14:21.741 "params": { 00:14:21.741 "action_on_timeout": "none", 00:14:21.741 "timeout_us": 0, 00:14:21.741 "timeout_admin_us": 0, 00:14:21.741 "keep_alive_timeout_ms": 10000, 00:14:21.741 "arbitration_burst": 0, 00:14:21.741 "low_priority_weight": 0, 00:14:21.741 "medium_priority_weight": 0, 00:14:21.741 "high_priority_weight": 0, 00:14:21.741 "nvme_adminq_poll_period_us": 10000, 00:14:21.741 "nvme_ioq_poll_period_us": 0, 00:14:21.741 "io_queue_requests": 512, 00:14:21.741 "delay_cmd_submit": true, 00:14:21.741 "transport_retry_count": 4, 00:14:21.741 "bdev_retry_count": 3, 00:14:21.741 "transport_ack_timeout": 0, 00:14:21.741 "ctrlr_loss_timeout_sec": 0, 00:14:21.741 "reconnect_delay_sec": 0, 00:14:21.741 "fast_io_fail_timeout_sec": 0, 00:14:21.741 "disable_auto_failback": false, 00:14:21.741 "generate_uuids": false, 00:14:21.741 "transport_tos": 0, 00:14:21.741 "nvme_error_stat": false, 00:14:21.741 "rdma_srq_size": 0, 00:14:21.741 "io_path_stat": false, 00:14:21.741 "allow_accel_sequence": false, 00:14:21.741 "rdma_max_cq_size": 0, 00:14:21.741 "rdma_cm_event_timeout_ms": 0, 00:14:21.741 "dhchap_digests": [ 00:14:21.741 "sha256", 00:14:21.741 "sha384", 00:14:21.741 "sha512" 00:14:21.741 ], 00:14:21.741 "dhchap_dhgroups": [ 00:14:21.741 "null", 00:14:21.741 "ffdhe2048", 00:14:21.741 "ffdhe3072", 00:14:21.741 "ffdhe4096", 00:14:21.741 "ffdhe6144", 00:14:21.741 "ffdhe8192" 00:14:21.741 ] 00:14:21.741 } 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "method": "bdev_nvme_attach_controller", 00:14:21.741 "params": { 00:14:21.741 "name": "nvme0", 00:14:21.741 "trtype": "TCP", 00:14:21.741 "adrfam": "IPv4", 00:14:21.741 "traddr": "10.0.0.3", 00:14:21.741 "trsvcid": "4420", 00:14:21.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.741 "prchk_reftag": false, 00:14:21.741 "prchk_guard": false, 00:14:21.741 "ctrlr_loss_timeout_sec": 0, 00:14:21.741 "reconnect_delay_sec": 0, 00:14:21.741 "fast_io_fail_timeout_sec": 0, 00:14:21.741 "psk": "key0", 00:14:21.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.741 "hdgst": false, 00:14:21.741 "ddgst": false 00:14:21.741 } 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "method": "bdev_nvme_set_hotplug", 00:14:21.741 "params": { 00:14:21.741 "period_us": 100000, 00:14:21.741 "enable": false 00:14:21.741 } 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "method": "bdev_enable_histogram", 00:14:21.741 "params": { 00:14:21.741 "name": "nvme0n1", 00:14:21.741 "enable": true 00:14:21.741 } 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "method": "bdev_wait_for_examine" 00:14:21.741 } 00:14:21.741 ] 00:14:21.741 }, 00:14:21.741 { 00:14:21.741 "subsystem": "nbd", 00:14:21.741 "config": [] 00:14:21.741 } 00:14:21.741 ] 00:14:21.741 }' 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 82398 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 82398 ']' 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- 
# kill -0 82398 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82398 00:14:21.741 killing process with pid 82398 00:14:21.741 Received shutdown signal, test time was about 1.000000 seconds 00:14:21.741 00:14:21.741 Latency(us) 00:14:21.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.741 =================================================================================================================== 00:14:21.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82398' 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 82398 00:14:21.741 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 82398 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 82360 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 82360 ']' 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 82360 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82360 00:14:22.000 killing process with pid 82360 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82360' 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 82360 00:14:22.000 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 82360 00:14:22.258 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:14:22.258 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:14:22.258 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:22.258 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:14:22.258 "subsystems": [ 00:14:22.258 { 00:14:22.258 "subsystem": "keyring", 00:14:22.258 "config": [ 00:14:22.258 { 00:14:22.258 "method": "keyring_file_add_key", 00:14:22.258 "params": { 00:14:22.258 "name": "key0", 00:14:22.258 "path": "/tmp/tmp.Mvcj9h1dED" 00:14:22.258 } 00:14:22.258 } 00:14:22.258 ] 00:14:22.258 }, 00:14:22.258 { 00:14:22.258 "subsystem": "iobuf", 00:14:22.258 "config": [ 00:14:22.258 { 00:14:22.258 "method": 
"iobuf_set_options", 00:14:22.258 "params": { 00:14:22.259 "small_pool_count": 8192, 00:14:22.259 "large_pool_count": 1024, 00:14:22.259 "small_bufsize": 8192, 00:14:22.259 "large_bufsize": 135168 00:14:22.259 } 00:14:22.259 } 00:14:22.259 ] 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "subsystem": "sock", 00:14:22.259 "config": [ 00:14:22.259 { 00:14:22.259 "method": "sock_set_default_impl", 00:14:22.259 "params": { 00:14:22.259 "impl_name": "uring" 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "sock_impl_set_options", 00:14:22.259 "params": { 00:14:22.259 "impl_name": "ssl", 00:14:22.259 "recv_buf_size": 4096, 00:14:22.259 "send_buf_size": 4096, 00:14:22.259 "enable_recv_pipe": true, 00:14:22.259 "enable_quickack": false, 00:14:22.259 "enable_placement_id": 0, 00:14:22.259 "enable_zerocopy_send_server": true, 00:14:22.259 "enable_zerocopy_send_client": false, 00:14:22.259 "zerocopy_threshold": 0, 00:14:22.259 "tls_version": 0, 00:14:22.259 "enable_ktls": false 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "sock_impl_set_options", 00:14:22.259 "params": { 00:14:22.259 "impl_name": "posix", 00:14:22.259 "recv_buf_size": 2097152, 00:14:22.259 "send_buf_size": 2097152, 00:14:22.259 "enable_recv_pipe": true, 00:14:22.259 "enable_quickack": false, 00:14:22.259 "enable_placement_id": 0, 00:14:22.259 "enable_zerocopy_send_server": true, 00:14:22.259 "enable_zerocopy_send_client": false, 00:14:22.259 "zerocopy_threshold": 0, 00:14:22.259 "tls_version": 0, 00:14:22.259 "enable_ktls": false 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "sock_impl_set_options", 00:14:22.259 "params": { 00:14:22.259 "impl_name": "uring", 00:14:22.259 "recv_buf_size": 2097152, 00:14:22.259 "send_buf_size": 2097152, 00:14:22.259 "enable_recv_pipe": true, 00:14:22.259 "enable_quickack": false, 00:14:22.259 "enable_placement_id": 0, 00:14:22.259 "enable_zerocopy_send_server": false, 00:14:22.259 "enable_zerocopy_send_client": false, 00:14:22.259 "zerocopy_threshold": 0, 00:14:22.259 "tls_version": 0, 00:14:22.259 "enable_ktls": false 00:14:22.259 } 00:14:22.259 } 00:14:22.259 ] 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "subsystem": "vmd", 00:14:22.259 "config": [] 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "subsystem": "accel", 00:14:22.259 "config": [ 00:14:22.259 { 00:14:22.259 "method": "accel_set_options", 00:14:22.259 "params": { 00:14:22.259 "small_cache_size": 128, 00:14:22.259 "large_cache_size": 16, 00:14:22.259 "task_count": 2048, 00:14:22.259 "sequence_count": 2048, 00:14:22.259 "buf_count": 2048 00:14:22.259 } 00:14:22.259 } 00:14:22.259 ] 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "subsystem": "bdev", 00:14:22.259 "config": [ 00:14:22.259 { 00:14:22.259 "method": "bdev_set_options", 00:14:22.259 "params": { 00:14:22.259 "bdev_io_pool_size": 65535, 00:14:22.259 "bdev_io_cache_size": 256, 00:14:22.259 "bdev_auto_examine": true, 00:14:22.259 "iobuf_small_cache_size": 128, 00:14:22.259 "iobuf_large_cache_size": 16 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "bdev_raid_set_options", 00:14:22.259 "params": { 00:14:22.259 "process_window_size_kb": 1024, 00:14:22.259 "process_max_bandwidth_mb_sec": 0 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "bdev_iscsi_set_options", 00:14:22.259 "params": { 00:14:22.259 "timeout_sec": 30 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "bdev_nvme_set_options", 00:14:22.259 "params": { 00:14:22.259 "action_on_timeout": "none", 00:14:22.259 
"timeout_us": 0, 00:14:22.259 "timeout_admin_us": 0, 00:14:22.259 "keep_alive_timeout_ms": 10000, 00:14:22.259 "arbitration_burst": 0, 00:14:22.259 "low_priority_weight": 0, 00:14:22.259 "medium_priority_weight": 0, 00:14:22.259 "high_priority_weight": 0, 00:14:22.259 "nvme_adminq_poll_period_us": 10000, 00:14:22.259 "nvme_ioq_poll_period_us": 0, 00:14:22.259 "io_queue_requests": 0, 00:14:22.259 "delay_cmd_submit": true, 00:14:22.259 "transport_retry_count": 4, 00:14:22.259 "bdev_retry_count": 3, 00:14:22.259 "transport_ack_timeout": 0, 00:14:22.259 "ctrlr_loss_timeout_sec": 0, 00:14:22.259 "reconnect_delay_sec": 0, 00:14:22.259 "fast_io_fail_timeout_sec": 0, 00:14:22.259 "disable_auto_failback": false, 00:14:22.259 "generate_uuids": false, 00:14:22.259 "transport_tos": 0, 00:14:22.259 "nvme_error_stat": false, 00:14:22.259 "rdma_srq_size": 0, 00:14:22.259 "io_path_stat": false, 00:14:22.259 "allow_accel_sequence": false, 00:14:22.259 "rdma_max_cq_size": 0, 00:14:22.259 "rdma_cm_event_timeout_ms": 0, 00:14:22.259 "dhchap_digests": [ 00:14:22.259 "sha256", 00:14:22.259 "sha384", 00:14:22.259 "sha512" 00:14:22.259 ], 00:14:22.259 "dhchap_dhgroups": [ 00:14:22.259 "null", 00:14:22.259 "ffdhe2048", 00:14:22.259 "ffdhe3072", 00:14:22.259 "ffdhe4096", 00:14:22.259 "ffdhe6144", 00:14:22.259 "ffdhe8192" 00:14:22.259 ] 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "bdev_nvme_set_hotplug", 00:14:22.259 "params": { 00:14:22.259 "period_us": 100000, 00:14:22.259 "enable": false 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "bdev_malloc_create", 00:14:22.259 "params": { 00:14:22.259 "name": "malloc0", 00:14:22.259 "num_blocks": 8192, 00:14:22.259 "block_size": 4096, 00:14:22.259 "physical_block_size": 4096, 00:14:22.259 "uuid": "ea4efb05-3f25-4df5-b954-104201e14ef0", 00:14:22.259 "optimal_io_boundary": 0, 00:14:22.259 "md_size": 0, 00:14:22.259 "dif_type": 0, 00:14:22.259 "dif_is_head_of_md": false, 00:14:22.259 "dif_pi_format": 0 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "bdev_wait_for_examine" 00:14:22.259 } 00:14:22.259 ] 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "subsystem": "nbd", 00:14:22.259 "config": [] 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "subsystem": "scheduler", 00:14:22.259 "config": [ 00:14:22.259 { 00:14:22.259 "method": "framework_set_scheduler", 00:14:22.259 "params": { 00:14:22.259 "name": "static" 00:14:22.259 } 00:14:22.259 } 00:14:22.259 ] 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "subsystem": "nvmf", 00:14:22.259 "config": [ 00:14:22.259 { 00:14:22.259 "method": "nvmf_set_config", 00:14:22.259 "params": { 00:14:22.259 "discovery_filter": "match_any", 00:14:22.259 "admin_cmd_passthru": { 00:14:22.259 "identify_ctrlr": false 00:14:22.259 } 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "nvmf_set_max_subsystems", 00:14:22.259 "params": { 00:14:22.259 "max_subsystems": 1024 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "nvmf_set_crdt", 00:14:22.259 "params": { 00:14:22.259 "crdt1": 0, 00:14:22.259 "crdt2": 0, 00:14:22.259 "crdt3": 0 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "nvmf_create_transport", 00:14:22.259 "params": { 00:14:22.259 "trtype": "TCP", 00:14:22.259 "max_queue_depth": 128, 00:14:22.259 "max_io_qpairs_per_ctrlr": 127, 00:14:22.259 "in_capsule_data_size": 4096, 00:14:22.259 "max_io_size": 131072, 00:14:22.259 "io_unit_size": 131072, 00:14:22.259 "max_aq_depth": 128, 00:14:22.259 "num_shared_buffers": 511, 00:14:22.259 
"buf_cache_size": 4294967295, 00:14:22.259 "dif_insert_or_strip": false, 00:14:22.259 "zcopy": false, 00:14:22.259 "c2h_success": false, 00:14:22.259 "sock_priority": 0, 00:14:22.259 "abort_timeout_sec": 1, 00:14:22.259 "ack_timeout": 0, 00:14:22.259 "data_wr_pool_size": 0 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "nvmf_create_subsystem", 00:14:22.259 "params": { 00:14:22.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.259 "allow_any_host": false, 00:14:22.259 "serial_number": "00000000000000000000", 00:14:22.259 "model_number": "SPDK bdev Controller", 00:14:22.259 "max_namespaces": 32, 00:14:22.259 "min_cntlid": 1, 00:14:22.259 "max_cntlid": 65519, 00:14:22.259 "ana_reporting": false 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "nvmf_subsystem_add_host", 00:14:22.259 "params": { 00:14:22.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.259 "host": "nqn.2016-06.io.spdk:host1", 00:14:22.259 "psk": "key0" 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "nvmf_subsystem_add_ns", 00:14:22.259 "params": { 00:14:22.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.259 "namespace": { 00:14:22.259 "nsid": 1, 00:14:22.259 "bdev_name": "malloc0", 00:14:22.259 "nguid": "EA4EFB053F254DF5B954104201E14EF0", 00:14:22.259 "uuid": "ea4efb05-3f25-4df5-b954-104201e14ef0", 00:14:22.259 "no_auto_visible": false 00:14:22.259 } 00:14:22.259 } 00:14:22.259 }, 00:14:22.259 { 00:14:22.259 "method": "nvmf_subsystem_add_listener", 00:14:22.259 "params": { 00:14:22.259 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.259 "listen_address": { 00:14:22.259 "trtype": "TCP", 00:14:22.259 "adrfam": "IPv4", 00:14:22.259 "traddr": "10.0.0.3", 00:14:22.259 "trsvcid": "4420" 00:14:22.259 }, 00:14:22.259 "secure_channel": false, 00:14:22.260 "sock_impl": "ssl" 00:14:22.260 } 00:14:22.260 } 00:14:22.260 ] 00:14:22.260 } 00:14:22.260 ] 00:14:22.260 }' 00:14:22.260 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.260 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:22.260 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@501 -- # nvmfpid=82458 00:14:22.260 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # waitforlisten 82458 00:14:22.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.260 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 82458 ']' 00:14:22.260 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.260 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:22.260 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.260 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:22.260 20:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.260 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:22.260 [2024-08-11 20:55:32.985939] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:14:22.260 [2024-08-11 20:55:32.986156] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.518 [2024-08-11 20:55:33.118396] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.518 [2024-08-11 20:55:33.186660] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.518 [2024-08-11 20:55:33.186882] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.518 [2024-08-11 20:55:33.186900] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.519 [2024-08-11 20:55:33.186909] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.519 [2024-08-11 20:55:33.186917] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.519 [2024-08-11 20:55:33.187026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.776 [2024-08-11 20:55:33.349629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.776 [2024-08-11 20:55:33.419132] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.776 [2024-08-11 20:55:33.451097] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:22.776 [2024-08-11 20:55:33.457869] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=82490 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 82490 /var/tmp/bdevperf.sock 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 82490 ']' 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:14:23.342 "subsystems": [ 00:14:23.342 { 00:14:23.342 "subsystem": "keyring", 00:14:23.342 "config": [ 00:14:23.342 { 00:14:23.342 "method": "keyring_file_add_key", 00:14:23.342 "params": { 00:14:23.342 "name": "key0", 00:14:23.342 "path": "/tmp/tmp.Mvcj9h1dED" 00:14:23.342 } 00:14:23.342 } 00:14:23.342 ] 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "subsystem": "iobuf", 00:14:23.342 "config": [ 00:14:23.342 { 00:14:23.342 "method": "iobuf_set_options", 00:14:23.342 "params": { 00:14:23.342 "small_pool_count": 8192, 
00:14:23.342 "large_pool_count": 1024, 00:14:23.342 "small_bufsize": 8192, 00:14:23.342 "large_bufsize": 135168 00:14:23.342 } 00:14:23.342 } 00:14:23.342 ] 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "subsystem": "sock", 00:14:23.342 "config": [ 00:14:23.342 { 00:14:23.342 "method": "sock_set_default_impl", 00:14:23.342 "params": { 00:14:23.342 "impl_name": "uring" 00:14:23.342 } 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "method": "sock_impl_set_options", 00:14:23.342 "params": { 00:14:23.342 "impl_name": "ssl", 00:14:23.342 "recv_buf_size": 4096, 00:14:23.342 "send_buf_size": 4096, 00:14:23.342 "enable_recv_pipe": true, 00:14:23.342 "enable_quickack": false, 00:14:23.342 "enable_placement_id": 0, 00:14:23.342 "enable_zerocopy_send_server": true, 00:14:23.342 "enable_zerocopy_send_client": false, 00:14:23.342 "zerocopy_threshold": 0, 00:14:23.342 "tls_version": 0, 00:14:23.342 "enable_ktls": false 00:14:23.342 } 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "method": "sock_impl_set_options", 00:14:23.342 "params": { 00:14:23.342 "impl_name": "posix", 00:14:23.342 "recv_buf_size": 2097152, 00:14:23.342 "send_buf_size": 2097152, 00:14:23.342 "enable_recv_pipe": true, 00:14:23.342 "enable_quickack": false, 00:14:23.342 "enable_placement_id": 0, 00:14:23.342 "enable_zerocopy_send_server": true, 00:14:23.342 "enable_zerocopy_send_client": false, 00:14:23.342 "zerocopy_threshold": 0, 00:14:23.342 "tls_version": 0, 00:14:23.342 "enable_ktls": false 00:14:23.342 } 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "method": "sock_impl_set_options", 00:14:23.342 "params": { 00:14:23.342 "impl_name": "uring", 00:14:23.342 "recv_buf_size": 2097152, 00:14:23.342 "send_buf_size": 2097152, 00:14:23.342 "enable_recv_pipe": true, 00:14:23.342 "enable_quickack": false, 00:14:23.342 "enable_placement_id": 0, 00:14:23.342 "enable_zerocopy_send_server": false, 00:14:23.342 "enable_zerocopy_send_client": false, 00:14:23.342 "zerocopy_threshold": 0, 00:14:23.342 "tls_version": 0, 00:14:23.342 "enable_ktls": false 00:14:23.342 } 00:14:23.342 } 00:14:23.342 ] 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "subsystem": "vmd", 00:14:23.342 "config": [] 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "subsystem": "accel", 00:14:23.342 "config": [ 00:14:23.342 { 00:14:23.342 "method": "accel_set_options", 00:14:23.342 "params": { 00:14:23.342 "small_cache_size": 128, 00:14:23.342 "large_cache_size": 16, 00:14:23.342 "task_count": 2048, 00:14:23.342 "sequence_count": 2048, 00:14:23.342 "buf_count": 2048 00:14:23.342 } 00:14:23.342 } 00:14:23.342 ] 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "subsystem": "bdev", 00:14:23.342 "config": [ 00:14:23.342 { 00:14:23.342 "method": "bdev_set_options", 00:14:23.342 "params": { 00:14:23.342 "bdev_io_pool_size": 65535, 00:14:23.342 "bdev_io_cache_size": 256, 00:14:23.342 "bdev_auto_examine": true, 00:14:23.342 "iobuf_small_cache_size": 128, 00:14:23.342 "iobuf_large_cache_size": 16 00:14:23.342 } 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "method": "bdev_raid_set_options", 00:14:23.342 "params": { 00:14:23.342 "process_window_size_kb": 1024, 00:14:23.342 "process_max_bandwidth_mb_sec": 0 00:14:23.342 } 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "method": "bdev_iscsi_set_options", 00:14:23.342 "params": { 00:14:23.342 "timeout_sec": 30 00:14:23.342 } 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "method": "bdev_nvme_set_options", 00:14:23.342 "params": { 00:14:23.342 "action_on_timeout": "none", 00:14:23.342 "timeout_us": 0, 00:14:23.342 "timeout_admin_us": 0, 00:14:23.342 
"keep_alive_timeout_ms": 10000, 00:14:23.342 "arbitration_burst": 0, 00:14:23.342 "low_priority_weight": 0, 00:14:23.342 "medium_priority_weight": 0, 00:14:23.342 "high_priority_weight": 0, 00:14:23.342 "nvme_adminq_poll_period_us": 10000, 00:14:23.342 "nvme_ioq_poll_period_us": 0, 00:14:23.342 "io_queue_requests": 512, 00:14:23.342 "delay_cmd_submit": true, 00:14:23.342 "transport_retry_count": 4, 00:14:23.342 "bdev_retry_count": 3, 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.342 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.342 "transport_ack_timeout": 0, 00:14:23.342 "ctrlr_loss_timeout_sec": 0, 00:14:23.342 "reconnect_delay_sec": 0, 00:14:23.342 "fast_io_fail_timeout_sec": 0, 00:14:23.342 "disable_auto_failback": false, 00:14:23.342 "generate_uuids": false, 00:14:23.342 "transport_tos": 0, 00:14:23.342 "nvme_error_stat": false, 00:14:23.342 "rdma_srq_size": 0, 00:14:23.342 "io_path_stat": false, 00:14:23.342 "allow_accel_sequence": false, 00:14:23.342 "rdma_max_cq_size": 0, 00:14:23.342 "rdma_cm_event_timeout_ms": 0, 00:14:23.342 "dhchap_digests": [ 00:14:23.342 "sha256", 00:14:23.342 "sha384", 00:14:23.342 "sha512" 00:14:23.342 ], 00:14:23.342 "dhchap_dhgroups": [ 00:14:23.342 "null", 00:14:23.342 "ffdhe2048", 00:14:23.342 "ffdhe3072", 00:14:23.342 "ffdhe4096", 00:14:23.342 "ffdhe6144", 00:14:23.342 "ffdhe8192" 00:14:23.342 ] 00:14:23.342 } 00:14:23.342 }, 00:14:23.342 { 00:14:23.343 "method": "bdev_nvme_attach_controller", 00:14:23.343 "params": { 00:14:23.343 "name": "nvme0", 00:14:23.343 "trtype": "TCP", 00:14:23.343 "adrfam": "IPv4", 00:14:23.343 "traddr": "10.0.0.3", 00:14:23.343 "trsvcid": "4420", 00:14:23.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.343 "prchk_reftag": false, 00:14:23.343 "prchk_guard": false, 00:14:23.343 "ctrlr_loss_timeout_sec": 0, 00:14:23.343 "reconnect_delay_sec": 0, 00:14:23.343 "fast_io_fail_timeout_sec": 0, 00:14:23.343 "psk": "key0", 00:14:23.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:23.343 "hdgst": false, 00:14:23.343 "ddgst": false 00:14:23.343 } 00:14:23.343 }, 00:14:23.343 { 00:14:23.343 "method": "bdev_nvme_set_hotplug", 00:14:23.343 "params": { 00:14:23.343 "period_us": 100000, 00:14:23.343 "enable": false 00:14:23.343 } 00:14:23.343 }, 00:14:23.343 { 00:14:23.343 "method": "bdev_enable_histogram", 00:14:23.343 "params": { 00:14:23.343 "name": "nvme0n1", 00:14:23.343 "enable": true 00:14:23.343 } 00:14:23.343 }, 00:14:23.343 { 00:14:23.343 "method": "bdev_wait_for_examine" 00:14:23.343 } 00:14:23.343 ] 00:14:23.343 }, 00:14:23.343 { 00:14:23.343 "subsystem": "nbd", 00:14:23.343 "config": [] 00:14:23.343 } 00:14:23.343 ] 00:14:23.343 }' 00:14:23.343 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.343 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.343 20:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.343 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:23.343 [2024-08-11 20:55:33.990347] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:14:23.343 [2024-08-11 20:55:33.990430] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82490 ] 00:14:23.343 [2024-08-11 20:55:34.113673] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.601 [2024-08-11 20:55:34.170576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.601 [2024-08-11 20:55:34.304374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:23.601 [2024-08-11 20:55:34.346169] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:24.169 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:24.169 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:24.169 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:24.169 20:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:14:24.428 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.428 20:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.700 Running I/O for 1 seconds... 00:14:25.634 00:14:25.634 Latency(us) 00:14:25.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.634 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:25.634 Verification LBA range: start 0x0 length 0x2000 00:14:25.634 nvme0n1 : 1.01 4893.59 19.12 0.00 0.00 25941.04 4766.25 21686.46 00:14:25.634 =================================================================================================================== 00:14:25.634 Total : 4893.59 19.12 0.00 0.00 25941.04 4766.25 21686.46 00:14:25.634 0 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:25.634 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:25.634 nvmf_trace.0 00:14:25.893 20:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 82490 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 82490 ']' 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 82490 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82490 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:25.893 killing process with pid 82490 00:14:25.893 Received shutdown signal, test time was about 1.000000 seconds 00:14:25.893 00:14:25.893 Latency(us) 00:14:25.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.893 =================================================================================================================== 00:14:25.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82490' 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 82490 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 82490 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # nvmfcleanup 00:14:25.893 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.152 rmmod nvme_tcp 00:14:26.152 rmmod nvme_fabrics 00:14:26.152 rmmod nvme_keyring 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # '[' -n 82458 ']' 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # killprocess 82458 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 82458 ']' 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 82458 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:26.152 20:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82458 00:14:26.152 killing process with pid 82458 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82458' 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@965 -- # kill 82458 00:14:26.152 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # wait 82458 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # iptr 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@783 -- # iptables-save 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@783 -- # iptables-restore 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:14:26.410 20:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # remove_spdk_ns 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.410 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.669 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # return 0 00:14:26.669 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.bEvp2YXQdm /tmp/tmp.SaP6PgEHDC /tmp/tmp.Mvcj9h1dED 00:14:26.669 00:14:26.669 real 1m20.203s 00:14:26.669 user 2m5.716s 00:14:26.670 sys 0m27.032s 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.670 ************************************ 00:14:26.670 END TEST nvmf_tls 00:14:26.670 ************************************ 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:26.670 ************************************ 00:14:26.670 START TEST nvmf_fips 00:14:26.670 ************************************ 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:26.670 * Looking for test storage... 00:14:26.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.670 20:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:26.670 20:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 1 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=1 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 1 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=1 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:26.670 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:26.671 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # local es=0 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@634 -- # local arg=openssl 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # type -t openssl 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -P openssl 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # arg=/usr/bin/openssl 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@640 -- # [[ -x /usr/bin/openssl ]] 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@649 -- # openssl md5 /dev/fd/62 00:14:26.930 Error setting digest 00:14:26.930 4072A3B9C87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:26.930 4072A3B9C87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@649 -- # es=1 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # prepare_net_devs 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # local -g is_hw=no 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # remove_spdk_ns 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@452 -- # nvmf_veth_init 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:14:26.930 Cannot find device "nvmf_init_br" 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:14:26.930 Cannot find device "nvmf_init_br2" 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:14:26.930 Cannot find device "nvmf_tgt_br" 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # true 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.930 Cannot find device "nvmf_tgt_br2" 00:14:26.930 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # true 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:14:26.931 Cannot find device "nvmf_init_br" 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:14:26.931 Cannot find device "nvmf_init_br2" 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:14:26.931 Cannot find device "nvmf_tgt_br" 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:14:26.931 Cannot find device "nvmf_tgt_br2" 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:14:26.931 Cannot find device "nvmf_br" 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:14:26.931 Cannot find device "nvmf_init_if" 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/common.sh@167 -- # true 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:14:26.931 Cannot find device "nvmf_init_if2" 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:26.931 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk 
ip link set lo up 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:14:27.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:27.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:14:27.190 00:14:27.190 --- 10.0.0.3 ping statistics --- 00:14:27.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.190 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:14:27.190 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:27.190 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:14:27.190 00:14:27.190 --- 10.0.0.4 ping statistics --- 00:14:27.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.190 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:27.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:27.190 00:14:27.190 --- 10.0.0.1 ping statistics --- 00:14:27.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.190 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:27.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:27.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:14:27.190 00:14:27.190 --- 10.0.0.2 ping statistics --- 00:14:27.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.190 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@453 -- # return 0 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@501 -- # nvmfpid=82790 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # waitforlisten 82790 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 82790 ']' 00:14:27.190 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.191 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:27.191 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.191 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:27.191 20:55:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:27.449 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:27.449 [2024-08-11 20:55:38.025461] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:14:27.449 [2024-08-11 20:55:38.025773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.449 [2024-08-11 20:55:38.164617] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.708 [2024-08-11 20:55:38.258934] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.708 [2024-08-11 20:55:38.259149] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.708 [2024-08-11 20:55:38.259360] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.708 [2024-08-11 20:55:38.259626] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.708 [2024-08-11 20:55:38.259793] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.708 [2024-08-11 20:55:38.259867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.708 [2024-08-11 20:55:38.320117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:28.642 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.642 [2024-08-11 20:55:39.413235] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.900 [2024-08-11 20:55:39.429174] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:28.900 [2024-08-11 20:55:39.429380] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:28.900 [2024-08-11 20:55:39.460680] 
tcp.c:3766:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:28.900 malloc0 00:14:28.900 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:28.900 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=82835 00:14:28.900 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:28.900 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 82835 /var/tmp/bdevperf.sock 00:14:28.900 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 82835 ']' 00:14:28.900 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:28.900 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:28.900 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:28.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:28.900 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:28.900 20:55:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:28.900 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:28.900 [2024-08-11 20:55:39.591023] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:14:28.900 [2024-08-11 20:55:39.591367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82835 ] 00:14:29.159 [2024-08-11 20:55:39.735637] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.159 [2024-08-11 20:55:39.802880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.159 [2024-08-11 20:55:39.860073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.123 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:30.123 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:14:30.123 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:30.124 [2024-08-11 20:55:40.782443] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:30.124 [2024-08-11 20:55:40.782531] nvme_tcp.c:2594:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:30.124 TLSTESTn1 00:14:30.124 20:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:30.383 Running I/O for 10 seconds... 
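For reference, the TLS attach path exercised above can be replayed by hand with the same RPCs fips.sh drives; a minimal sketch, using the address, port, NQNs and sockets shown in this log (the PSK value itself is elided here, and rpc.py / bdevperf.py stand in for the full repository paths logged above):

# Sketch only: TLS-over-TCP attach as exercised by fips.sh, values copied from the log.
echo -n 'NVMeTLSkey-1:01:...' > key.txt   # interleaved PSK; value elided in this sketch
chmod 0600 key.txt                        # fips.sh restricts the key file before use

# Attach a TLS-secured controller through the bdevperf RPC socket:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt

# Drive the verify workload against the resulting TLSTESTn1 bdev:
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests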
00:14:40.359 00:14:40.359 Latency(us) 00:14:40.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.359 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:40.359 Verification LBA range: start 0x0 length 0x2000 00:14:40.359 TLSTESTn1 : 10.02 3759.44 14.69 0.00 0.00 33976.17 4796.04 23473.80 00:14:40.359 =================================================================================================================== 00:14:40.359 Total : 3759.44 14.69 0.00 0.00 33976.17 4796.04 23473.80 00:14:40.359 0 00:14:40.359 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:40.359 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:40.359 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:14:40.359 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:14:40.359 20:55:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:40.359 nvmf_trace.0 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 82835 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 82835 ']' 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 82835 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82835 00:14:40.359 killing process with pid 82835 00:14:40.359 Received shutdown signal, test time was about 10.000000 seconds 00:14:40.359 00:14:40.359 Latency(us) 00:14:40.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.359 =================================================================================================================== 00:14:40.359 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82835' 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@965 -- # kill 82835 00:14:40.359 [2024-08-11 20:55:51.135864] 
app.c:1025:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:40.359 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # wait 82835 00:14:40.618 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:40.618 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # nvmfcleanup 00:14:40.618 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:40.618 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.618 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:40.619 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.619 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.619 rmmod nvme_tcp 00:14:40.619 rmmod nvme_fabrics 00:14:40.878 rmmod nvme_keyring 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # '[' -n 82790 ']' 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # killprocess 82790 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 82790 ']' 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 82790 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82790 00:14:40.878 killing process with pid 82790 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82790' 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@965 -- # kill 82790 00:14:40.878 [2024-08-11 20:55:51.456084] app.c:1025:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # wait 82790 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:14:40.878 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # iptr 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@783 -- # iptables-save 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@783 -- # iptables-restore 00:14:41.137 
20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # remove_spdk_ns 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # return 0 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:41.137 ************************************ 00:14:41.137 END TEST nvmf_fips 00:14:41.137 ************************************ 00:14:41.137 00:14:41.137 real 0m14.652s 00:14:41.137 user 0m19.807s 00:14:41.137 sys 0m5.984s 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.137 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:41.397 20:55:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:41.397 20:55:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:41.397 20:55:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.397 20:55:51 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:41.397 ************************************ 00:14:41.397 START TEST nvmf_control_msg_list 00:14:41.397 ************************************ 00:14:41.397 20:55:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:41.397 * Looking for test storage... 00:14:41.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.397 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # : 0 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@468 -- # prepare_net_devs 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # local -g is_hw=no 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # remove_spdk_ns 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@452 -- # nvmf_veth_init 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:41.398 20:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:14:41.398 Cannot find device "nvmf_init_br" 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # true 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:14:41.398 Cannot find device "nvmf_init_br2" 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # true 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:14:41.398 Cannot find device "nvmf_tgt_br" 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # true 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.398 Cannot find device "nvmf_tgt_br2" 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@161 -- # true 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:14:41.398 Cannot find device "nvmf_init_br" 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:14:41.398 Cannot find device "nvmf_init_br2" 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:14:41.398 Cannot find device "nvmf_tgt_br" 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:14:41.398 Cannot find device "nvmf_tgt_br2" 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:14:41.398 Cannot find device "nvmf_br" 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:41.398 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:14:41.658 Cannot find device "nvmf_init_if" 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # 
ip link delete nvmf_init_if2 00:14:41.658 Cannot find device "nvmf_init_if2" 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.658 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.658 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:41.658 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:14:41.918 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:41.918 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:14:41.918 00:14:41.918 --- 10.0.0.3 ping statistics --- 00:14:41.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.918 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:14:41.918 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:41.918 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:14:41.918 00:14:41.918 --- 10.0.0.4 ping statistics --- 00:14:41.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.918 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:41.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:41.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:41.918 00:14:41.918 --- 10.0.0.1 ping statistics --- 00:14:41.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.918 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:41.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:14:41.918 00:14:41.918 --- 10.0.0.2 ping statistics --- 00:14:41.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.918 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@453 -- # return 0 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:14:41.918 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@501 -- # nvmfpid=83201 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # waitforlisten 83201 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@827 -- # '[' -z 83201 ']' 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:41.919 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:41.919 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:41.919 [2024-08-11 20:55:52.560468] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:14:41.919 [2024-08-11 20:55:52.560542] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.178 [2024-08-11 20:55:52.699695] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.178 [2024-08-11 20:55:52.764037] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.178 [2024-08-11 20:55:52.764105] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.178 [2024-08-11 20:55:52.764120] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.178 [2024-08-11 20:55:52.764131] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.178 [2024-08-11 20:55:52.764140] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.178 [2024-08-11 20:55:52.764181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.178 [2024-08-11 20:55:52.820171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # return 0 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:42.178 [2024-08-11 20:55:52.926460] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:42.178 20:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:42.178 Malloc0 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:42.178 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:42.438 [2024-08-11 20:55:52.965816] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=83230 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=83231 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=83232 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 83230 00:14:42.438 20:55:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:42.438 
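The control-message-list setup above reduces to a short RPC sequence followed by three competing initiators; a sketch under the same parameters the script uses (rpc.py and spdk_nvme_perf stand in for the full paths logged above):

# Sketch: target-side configuration behind control_msg_list.sh, values taken from the log.
rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
rpc.py bdev_malloc_create -b Malloc0 32 512
rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# Three queue-depth-1, 4K randread initiators then contend for the single control message buffer:
for core in 0x2 0x4 0x8; do
    spdk_nvme_perf -c "$core" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
done
wait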
Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:42.438 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:42.438 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:42.438 [2024-08-11 20:55:53.154246] subsystem.c:1586:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:42.438 [2024-08-11 20:55:53.154747] subsystem.c:1586:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:42.438 [2024-08-11 20:55:53.155167] subsystem.c:1586:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:43.815 Initializing NVMe Controllers 00:14:43.815 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:43.815 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:43.815 Initialization complete. Launching workers. 00:14:43.815 ======================================================== 00:14:43.815 Latency(us) 00:14:43.815 Device Information : IOPS MiB/s Average min max 00:14:43.815 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4113.00 16.07 242.83 197.46 1512.35 00:14:43.815 ======================================================== 00:14:43.815 Total : 4113.00 16.07 242.83 197.46 1512.35 00:14:43.815 00:14:43.815 Initializing NVMe Controllers 00:14:43.815 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:43.815 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:43.815 Initialization complete. Launching workers. 00:14:43.815 ======================================================== 00:14:43.815 Latency(us) 00:14:43.815 Device Information : IOPS MiB/s Average min max 00:14:43.815 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4116.00 16.08 242.57 196.91 430.38 00:14:43.815 ======================================================== 00:14:43.815 Total : 4116.00 16.08 242.57 196.91 430.38 00:14:43.815 00:14:43.815 Initializing NVMe Controllers 00:14:43.815 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:43.815 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:43.815 Initialization complete. Launching workers. 
00:14:43.815 ======================================================== 00:14:43.815 Latency(us) 00:14:43.815 Device Information : IOPS MiB/s Average min max 00:14:43.815 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4108.62 16.05 243.10 140.97 780.53 00:14:43.815 ======================================================== 00:14:43.815 Total : 4108.62 16.05 243.10 140.97 780.53 00:14:43.815 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 83231 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 83232 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # nvmfcleanup 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@117 -- # sync 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@120 -- # set +e 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:43.815 rmmod nvme_tcp 00:14:43.815 rmmod nvme_fabrics 00:14:43.815 rmmod nvme_keyring 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set -e 00:14:43.815 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # return 0 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # '[' -n 83201 ']' 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # killprocess 83201 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@946 -- # '[' -z 83201 ']' 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # kill -0 83201 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@951 -- # uname 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83201 00:14:43.816 killing process with pid 83201 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83201' 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@965 -- # kill 83201 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@970 -- # wait 83201 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # iptr 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@783 -- # iptables-save 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@783 -- # iptables-restore 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:14:43.816 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # remove_spdk_ns 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # return 0 00:14:44.075 00:14:44.075 real 0m2.862s 00:14:44.075 user 0m4.645s 00:14:44.075 
sys 0m1.373s 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:44.075 ************************************ 00:14:44.075 END TEST nvmf_control_msg_list 00:14:44.075 ************************************ 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:44.075 20:55:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:44.335 ************************************ 00:14:44.335 START TEST nvmf_wait_for_buf 00:14:44.335 ************************************ 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:44.335 * Looking for test storage... 00:14:44.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.335 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # : 0 00:14:44.336 
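nvmf/common.sh, sourced above, generates a fresh host NQN for the run with nvme gen-hostnqn and keeps it (together with the matching host ID) in the NVME_HOST array so every initiator-side connect uses the same identity. A minimal illustration of how those flags would look on a manual nvme-cli connect, with the address and subsystem NQN taken from this run; the uuidgen call is an assumption for the sketch only (common.sh instead reuses the UUID embedded in the generated host NQN):

    # Illustrative only: explicit host identity on an initiator connect
    HOSTNQN=$(nvme gen-hostnqn)
    HOSTID=$(uuidgen)   # assumption for the sketch; the helper derives it from HOSTNQN
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2024-07.io.spdk:cnode0 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"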
20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@468 -- # prepare_net_devs 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # local -g is_hw=no 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # remove_spdk_ns 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@452 -- # nvmf_veth_init 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:44.336 
20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:14:44.336 Cannot find device "nvmf_init_br" 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # true 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:14:44.336 Cannot find device "nvmf_init_br2" 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # true 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:14:44.336 Cannot find device "nvmf_tgt_br" 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@160 -- # true 00:14:44.336 20:55:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.336 Cannot find device "nvmf_tgt_br2" 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@161 -- # true 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:14:44.336 Cannot find device "nvmf_init_br" 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:14:44.336 Cannot find device "nvmf_init_br2" 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:14:44.336 Cannot find device "nvmf_tgt_br" 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:14:44.336 Cannot find device "nvmf_tgt_br2" 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:14:44.336 Cannot find device "nvmf_br" 00:14:44.336 20:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:14:44.336 Cannot find device "nvmf_init_if" 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:14:44.336 Cannot find device "nvmf_init_if2" 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:44.336 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:14:44.596 20:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:44.596 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:14:44.596 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:44.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:14:44.597 00:14:44.597 --- 10.0.0.3 ping statistics --- 00:14:44.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.597 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:14:44.597 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:44.597 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:14:44.597 00:14:44.597 --- 10.0.0.4 ping statistics --- 00:14:44.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.597 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:44.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:14:44.597 00:14:44.597 --- 10.0.0.1 ping statistics --- 00:14:44.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.597 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:44.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:44.597 00:14:44.597 --- 10.0.0.2 ping statistics --- 00:14:44.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.597 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@453 -- # return 0 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@501 -- # nvmfpid=83454 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # waitforlisten 83454 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@827 -- # '[' -z 83454 ']' 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:14:44.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:44.597 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:44.856 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:44.856 [2024-08-11 20:55:55.428532] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:14:44.856 [2024-08-11 20:55:55.428647] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.856 [2024-08-11 20:55:55.570058] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.115 [2024-08-11 20:55:55.635736] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.116 [2024-08-11 20:55:55.635791] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.116 [2024-08-11 20:55:55.635807] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.116 [2024-08-11 20:55:55.635817] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.116 [2024-08-11 20:55:55.635826] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
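The trace that follows is the core of the wait_for_buf case: the iobuf small-buffer pool is shrunk to 154 entries before framework init, the TCP transport is created with small buffer counts (-u 8192 -n 24 -b 24), and a 128 KiB random-read workload is run so requests have to wait for buffers; the pass criterion is a non-zero small_pool.retry counter from iobuf_get_stats. A minimal sketch of the same RPC sequence, assuming a freshly started nvmf_tgt launched with --wait-for-rpc and scripts/rpc.py on PATH:

    # Shrink the iobuf small pool before the framework starts
    rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc.py framework_start_init

    # Bring up the target and a listener on 10.0.0.3:4420
    rpc.py bdev_malloc_create -b Malloc0 32 512
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

    # Drive large reads so I/O has to wait on the exhausted small pool
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

    # A non-zero retry count means buffers were indeed waited for
    rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'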
00:14:45.116 [2024-08-11 20:55:55.635861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # return 0 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 [2024-08-11 20:55:55.768584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 Malloc0 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@557 -- # xtrace_disable 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 [2024-08-11 20:55:55.830821] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:45.116 [2024-08-11 20:55:55.854848] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:45.116 20:55:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:45.116 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:45.375 [2024-08-11 20:55:56.028743] subsystem.c:1586:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:46.756 Initializing NVMe Controllers 00:14:46.756 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:46.756 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:46.756 Initialization complete. Launching workers. 
00:14:46.756 ======================================================== 00:14:46.756 Latency(us) 00:14:46.756 Device Information : IOPS MiB/s Average min max 00:14:46.756 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 505.96 63.24 7905.99 6028.90 9971.21 00:14:46.756 ======================================================== 00:14:46.756 Total : 505.96 63.24 7905.99 6028.90 9971.21 00:14:46.756 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4826 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4826 -eq 0 ]] 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # nvmfcleanup 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@117 -- # sync 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@120 -- # set +e 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.756 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.756 rmmod nvme_tcp 00:14:46.756 rmmod nvme_fabrics 00:14:47.018 rmmod nvme_keyring 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set -e 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # return 0 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # '[' -n 83454 ']' 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # killprocess 83454 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@946 -- # '[' -z 83454 ']' 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # kill -0 83454 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@951 -- # uname 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83454 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:47.018 killing process with pid 83454 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83454' 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@965 -- # kill 83454 00:14:47.018 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # wait 83454 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # iptr 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@783 -- # iptables-save 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@783 -- # iptables-restore 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:14:47.277 20:55:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.277 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.277 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # remove_spdk_ns 00:14:47.277 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.277 20:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.277 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # return 0 00:14:47.537 00:14:47.537 real 0m3.208s 00:14:47.537 user 0m2.504s 00:14:47.537 sys 0m0.814s 00:14:47.537 ************************************ 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:47.537 END TEST nvmf_wait_for_buf 00:14:47.537 ************************************ 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:47.537 ************************************ 00:14:47.537 START TEST nvmf_fuzz 00:14:47.537 ************************************ 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:14:47.537 * Looking for test storage... 00:14:47.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # prepare_net_devs 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # local -g is_hw=no 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # remove_spdk_ns 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@452 -- # nvmf_veth_init 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:47.537 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:47.538 
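nvmftestinit here rebuilds the same veth/network-namespace topology that the wait_for_buf run used above: the target lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.3/10.0.0.4, the initiator stays in the root namespace on 10.0.0.1/10.0.0.2, the peer ends are joined by the nvmf_br bridge, and port 4420 is opened in iptables. A minimal single-pair sketch of that topology, assuming root privileges and no leftover interfaces (the real helper creates a second pair, nvmf_init_if2/nvmf_tgt_if2, the same way):

    # Namespaced target behind a bridge, as built by nvmf_veth_init
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # initiator -> target sanity check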
20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:14:47.538 Cannot find device "nvmf_init_br" 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:14:47.538 Cannot find device "nvmf_init_br2" 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:14:47.538 Cannot find device "nvmf_tgt_br" 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # true 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.538 Cannot find device "nvmf_tgt_br2" 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@161 -- # true 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:14:47.538 Cannot find device "nvmf_init_br" 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:14:47.538 Cannot find device "nvmf_init_br2" 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:14:47.538 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:14:47.796 Cannot find device "nvmf_tgt_br" 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:14:47.797 Cannot find device "nvmf_tgt_br2" 00:14:47.797 20:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:14:47.797 Cannot find device "nvmf_br" 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:14:47.797 Cannot find device "nvmf_init_if" 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:14:47.797 Cannot find device "nvmf_init_if2" 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.797 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip 
link set nvmf_tgt_br up 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:47.797 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:14:48.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:48.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:14:48.056 00:14:48.056 --- 10.0.0.3 ping statistics --- 00:14:48.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.056 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:14:48.056 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:48.056 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:14:48.056 00:14:48.056 --- 10.0.0.4 ping statistics --- 00:14:48.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.056 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:48.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:48.056 00:14:48.056 --- 10.0.0.1 ping statistics --- 00:14:48.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.056 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:48.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:14:48.056 00:14:48.056 --- 10.0.0.2 ping statistics --- 00:14:48.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.056 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@453 -- # return 0 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=83708 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 83708 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 83708 ']' 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:48.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
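For readers following the trace, the nvmf_veth_init sequence above reduces to a small, fixed topology: two initiator-side veth interfaces on the host (10.0.0.1 and 10.0.0.2), two target-side interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and a bridge joining their peer ends, which the ping checks then verify in both directions. A condensed stand-alone sketch of the same steps follows; interface names and addresses are copied from the trace, but this is a paraphrase, not the nvmf/common.sh implementation itself.

#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init builds in the trace above (run as root).
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get bridged
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# target-side ends live in the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# initiator addresses on the host, target addresses in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the peer ends so host and namespace can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done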
00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:48.056 20:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.314 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:48.314 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:14:48.314 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.315 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:48.315 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.573 Malloc0 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:14:48.573 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:14:48.831 Shutting down the fuzz application 00:14:48.831 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:14:49.089 Shutting down the fuzz application 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@508 -- # nvmfcleanup 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.089 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.089 rmmod nvme_tcp 00:14:49.089 rmmod nvme_fabrics 00:14:49.347 rmmod nvme_keyring 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@509 -- # '[' -n 83708 ']' 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@510 -- # killprocess 83708 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 83708 ']' 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 83708 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83708 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:49.347 killing process with pid 83708 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83708' 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 83708 00:14:49.347 20:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 83708 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:14:49.605 20:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # iptr 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@783 -- # iptables-save 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@783 -- # iptables-restore 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:14:49.605 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:14:49.606 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.606 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # remove_spdk_ns 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # return 0 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:14:49.864 ************************************ 00:14:49.864 END TEST nvmf_fuzz 00:14:49.864 ************************************ 00:14:49.864 00:14:49.864 real 0m2.324s 00:14:49.864 user 0m1.929s 00:14:49.864 sys 0m0.759s 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:49.864 20:56:00 
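Condensed, the fabrics fuzz pass traced above configures the already-running target (started inside nvmf_tgt_ns_spdk with -i 0 -e 0xFFFF -m 0x1, pid 83708) over its RPC socket, points nvme_fuzz at the resulting TCP listener, and then tears down, stripping only the SPDK-tagged iptables rules. The sketch below mirrors those steps but uses SPDK's scripts/rpc.py directly in place of the harness's rpc_cmd wrapper, which is an assumption about how a reader would reproduce the run by hand.

#!/usr/bin/env bash
# Target configuration, fuzz runs, and teardown (arguments copied from the trace).
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create -b Malloc0 64 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420'

# 30-second seeded random run, then a replay of the canned JSON command set
$fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
$fuzz -m 0x2 -F "$trid" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# teardown keeps unrelated firewall rules: drop only the SPDK_NVMF-commented ones
iptables-save | grep -v SPDK_NVMF | iptables-restore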
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:49.864 ************************************ 00:14:49.864 START TEST nvmf_multiconnection 00:14:49.864 ************************************ 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:14:49.864 * Looking for test storage... 00:14:49.864 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:14:49.864 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.865 20:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # prepare_net_devs 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # local -g is_hw=no 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # remove_spdk_ns 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@452 -- # nvmf_veth_init 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # 
NVMF_BRIDGE=nvmf_br 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:14:49.865 Cannot find device "nvmf_init_br" 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:14:49.865 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:14:50.124 Cannot find device "nvmf_init_br2" 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:14:50.124 Cannot find device "nvmf_tgt_br" 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:14:50.124 Cannot find device "nvmf_tgt_br2" 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@161 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:14:50.124 Cannot find device "nvmf_init_br" 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:14:50.124 Cannot find device "nvmf_init_br2" 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:14:50.124 Cannot find device "nvmf_tgt_br" 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:14:50.124 Cannot find device "nvmf_tgt_br2" 00:14:50.124 20:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:14:50.124 Cannot find device "nvmf_br" 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:14:50.124 Cannot find device "nvmf_init_if" 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:14:50.124 Cannot find device "nvmf_init_if2" 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:14:50.124 20:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:50.124 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:14:50.382 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:50.382 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:14:50.382 00:14:50.382 --- 10.0.0.3 ping statistics --- 00:14:50.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.382 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:14:50.382 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:50.382 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:14:50.382 00:14:50.382 --- 10.0.0.4 ping statistics --- 00:14:50.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.382 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:50.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:50.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:50.382 00:14:50.382 --- 10.0.0.1 ping statistics --- 00:14:50.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.382 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:50.382 20:56:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:50.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:14:50.382 00:14:50.382 --- 10.0.0.2 ping statistics --- 00:14:50.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.382 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@453 -- # return 0 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@501 -- # nvmfpid=83944 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # waitforlisten 83944 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 83944 ']' 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:50.382 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.382 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:14:50.382 [2024-08-11 20:56:01.088549] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:14:50.382 [2024-08-11 20:56:01.088649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.640 [2024-08-11 20:56:01.232634] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.640 [2024-08-11 20:56:01.313167] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.640 [2024-08-11 20:56:01.313538] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.640 [2024-08-11 20:56:01.313853] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.640 [2024-08-11 20:56:01.314020] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.640 [2024-08-11 20:56:01.314139] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
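The multiconnection run restarts the target with a four-core mask (-m 0xF), which is why the EAL reports four available cores and four reactors come up in the notices that follow; configuration RPCs are only issued once the app is listening on /var/tmp/spdk.sock. A hedged sketch of that launch-and-wait pattern is below; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its actual code.

# Start the target inside the test namespace and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# poll until the RPC server answers before sending any configuration
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "nvmf_tgt (pid $nvmfpid) is up"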
00:14:50.640 [2024-08-11 20:56:01.314433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.640 [2024-08-11 20:56:01.314538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.640 [2024-08-11 20:56:01.314664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.640 [2024-08-11 20:56:01.314664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.640 [2024-08-11 20:56:01.373501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 [2024-08-11 20:56:01.482702] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 Malloc1 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # 
xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 [2024-08-11 20:56:01.564064] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 Malloc2 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:14:50.898 20:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 Malloc3 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:50.898 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.156 Malloc4 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.156 Malloc5 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.156 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 Malloc6 
00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 Malloc7 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 Malloc8 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.157 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 Malloc9 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 Malloc10 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # 
xtrace_disable 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 Malloc11 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:51.415 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:51.672 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:14:51.672 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:14:51.672 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.672 20:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:51.672 20:56:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:14:53.569 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:53.569 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:53.569 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:14:53.569 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:53.569 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.569 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:14:53.569 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:53.569 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:14:53.827 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:14:53.827 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:14:53.827 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.827 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:53.827 20:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:14:55.727 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:55.727 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:55.727 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:14:55.727 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:55.727 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:55.727 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:14:55.727 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:55.727 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:14:55.985 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:14:55.985 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:14:55.985 20:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.985 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:55.985 20:56:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:57.952 20:56:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial 
SPDK5 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:00.485 20:56:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:02.390 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:02.390 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:02.390 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:15:02.390 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:02.390 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.390 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:02.390 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:02.390 20:56:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:15:02.390 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:15:02.390 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:02.390 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.390 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:02.390 20:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:04.299 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:04.299 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:04.299 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:15:04.557 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:04.557 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.557 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:04.557 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:04.557 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n 
nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:15:04.557 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:15:04.557 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:04.557 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.557 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:04.557 20:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:07.090 20:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # 
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:08.994 20:56:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:10.897 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:10.897 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:10.897 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:15:10.897 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:10.897 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.897 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:10.897 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:10.897 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:15:11.156 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:15:11.156 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:11.156 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.156 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:11.156 20:56:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:13.057 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:13.057 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:13.057 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:15:13.057 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:13.057 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.057 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:13.057 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:13.057 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:15:13.315 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:15:13.315 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:15:13.315 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.315 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:13.315 20:56:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:15:15.220 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:15.220 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:15.220 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:15:15.220 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:15.220 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.220 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:15:15.220 20:56:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:15:15.220 [global] 00:15:15.220 thread=1 00:15:15.220 invalidate=1 00:15:15.220 rw=read 00:15:15.220 time_based=1 00:15:15.220 runtime=10 00:15:15.220 ioengine=libaio 00:15:15.220 direct=1 00:15:15.220 bs=262144 00:15:15.220 iodepth=64 00:15:15.220 norandommap=1 00:15:15.220 numjobs=1 00:15:15.220 00:15:15.479 [job0] 00:15:15.479 filename=/dev/nvme0n1 00:15:15.479 [job1] 00:15:15.479 filename=/dev/nvme10n1 00:15:15.479 [job2] 00:15:15.479 filename=/dev/nvme1n1 00:15:15.479 [job3] 00:15:15.479 filename=/dev/nvme2n1 00:15:15.479 [job4] 00:15:15.479 filename=/dev/nvme3n1 00:15:15.479 [job5] 00:15:15.479 filename=/dev/nvme4n1 00:15:15.479 [job6] 00:15:15.479 filename=/dev/nvme5n1 00:15:15.479 [job7] 00:15:15.479 filename=/dev/nvme6n1 00:15:15.479 [job8] 00:15:15.479 filename=/dev/nvme7n1 00:15:15.479 [job9] 00:15:15.479 filename=/dev/nvme8n1 00:15:15.479 [job10] 00:15:15.479 filename=/dev/nvme9n1 00:15:15.479 Could not set queue depth (nvme0n1) 00:15:15.480 Could not set queue depth (nvme10n1) 00:15:15.480 Could not set queue depth (nvme1n1) 00:15:15.480 Could not set queue depth (nvme2n1) 00:15:15.480 Could not set queue depth (nvme3n1) 00:15:15.480 Could not set queue depth (nvme4n1) 00:15:15.480 Could not set queue depth (nvme5n1) 00:15:15.480 Could not set queue depth (nvme6n1) 00:15:15.480 Could not set queue depth (nvme7n1) 00:15:15.480 Could not set queue depth (nvme8n1) 00:15:15.480 Could not set queue depth (nvme9n1) 00:15:15.738 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:15:15.738 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:15.738 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:15.739 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:15.739 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:15.739 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:15.739 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:15.739 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:15.739 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:15.739 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:15.739 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:15.739 fio-3.35 00:15:15.739 Starting 11 threads 00:15:27.970 00:15:27.970 job0: (groupid=0, jobs=1): err= 0: pid=84391: Sun Aug 11 20:56:36 2024 00:15:27.970 read: IOPS=307, BW=77.0MiB/s (80.7MB/s)(779MiB/10125msec) 00:15:27.971 slat (usec): min=21, max=90325, avg=3202.74, stdev=8357.81 00:15:27.971 clat (msec): min=21, max=517, avg=204.26, stdev=92.49 00:15:27.971 lat (msec): min=23, max=531, avg=207.47, stdev=93.82 00:15:27.971 clat percentiles (msec): 00:15:27.971 | 1.00th=[ 88], 5.00th=[ 106], 10.00th=[ 111], 20.00th=[ 117], 00:15:27.971 | 30.00th=[ 125], 40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 197], 00:15:27.971 | 70.00th=[ 215], 80.00th=[ 313], 90.00th=[ 359], 95.00th=[ 372], 00:15:27.971 | 99.00th=[ 405], 99.50th=[ 430], 99.90th=[ 485], 99.95th=[ 518], 00:15:27.971 | 99.99th=[ 518] 00:15:27.971 bw ( KiB/s): min=41984, max=145408, per=10.48%, avg=78154.25, stdev=34427.48, samples=20 00:15:27.971 iops : min= 164, max= 568, avg=305.25, stdev=134.47, samples=20 00:15:27.971 lat (msec) : 50=0.74%, 100=1.67%, 250=72.25%, 500=25.28%, 750=0.06% 00:15:27.971 cpu : usr=0.21%, sys=1.41%, ctx=669, majf=0, minf=4097 00:15:27.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:15:27.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:27.971 issued rwts: total=3117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.971 job1: (groupid=0, jobs=1): err= 0: pid=84392: Sun Aug 11 20:56:36 2024 00:15:27.971 read: IOPS=129, BW=32.4MiB/s (34.0MB/s)(329MiB/10157msec) 00:15:27.971 slat (usec): min=22, max=141523, avg=7628.08, stdev=20236.70 00:15:27.971 clat (msec): min=17, max=785, avg=485.65, stdev=133.63 00:15:27.971 lat (msec): min=17, max=785, avg=493.28, stdev=135.32 00:15:27.971 clat percentiles (msec): 00:15:27.971 | 1.00th=[ 54], 5.00th=[ 222], 10.00th=[ 266], 20.00th=[ 397], 00:15:27.971 | 30.00th=[ 456], 40.00th=[ 489], 50.00th=[ 518], 60.00th=[ 535], 00:15:27.971 | 70.00th=[ 558], 80.00th=[ 592], 90.00th=[ 634], 95.00th=[ 659], 00:15:27.971 | 99.00th=[ 701], 99.50th=[ 718], 99.90th=[ 785], 99.95th=[ 785], 00:15:27.971 | 99.99th=[ 785] 00:15:27.971 bw ( KiB/s): min=19968, 
max=52736, per=4.30%, avg=32057.30, stdev=7846.24, samples=20 00:15:27.971 iops : min= 78, max= 206, avg=125.10, stdev=30.63, samples=20 00:15:27.971 lat (msec) : 20=0.15%, 50=0.15%, 100=0.76%, 250=7.37%, 500=35.18% 00:15:27.971 lat (msec) : 750=56.23%, 1000=0.15% 00:15:27.971 cpu : usr=0.06%, sys=0.70%, ctx=256, majf=0, minf=4098 00:15:27.971 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:15:27.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.971 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:27.971 issued rwts: total=1316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.971 job2: (groupid=0, jobs=1): err= 0: pid=84398: Sun Aug 11 20:56:36 2024 00:15:27.971 read: IOPS=190, BW=47.6MiB/s (49.9MB/s)(482MiB/10126msec) 00:15:27.971 slat (usec): min=23, max=87598, avg=5192.06, stdev=12396.74 00:15:27.971 clat (msec): min=18, max=481, avg=330.54, stdev=91.98 00:15:27.971 lat (msec): min=19, max=506, avg=335.73, stdev=93.30 00:15:27.971 clat percentiles (msec): 00:15:27.971 | 1.00th=[ 36], 5.00th=[ 138], 10.00th=[ 176], 20.00th=[ 275], 00:15:27.971 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 368], 00:15:27.971 | 70.00th=[ 380], 80.00th=[ 401], 90.00th=[ 422], 95.00th=[ 439], 00:15:27.971 | 99.00th=[ 464], 99.50th=[ 472], 99.90th=[ 481], 99.95th=[ 481], 00:15:27.971 | 99.99th=[ 481] 00:15:27.971 bw ( KiB/s): min=37376, max=92857, per=6.39%, avg=47693.45, stdev=13249.86, samples=20 00:15:27.971 iops : min= 146, max= 362, avg=186.20, stdev=51.65, samples=20 00:15:27.971 lat (msec) : 20=0.10%, 50=0.99%, 100=0.31%, 250=17.13%, 500=81.47% 00:15:27.971 cpu : usr=0.06%, sys=0.99%, ctx=409, majf=0, minf=4097 00:15:27.971 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:15:27.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.971 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:27.971 issued rwts: total=1927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.971 job3: (groupid=0, jobs=1): err= 0: pid=84400: Sun Aug 11 20:56:36 2024 00:15:27.971 read: IOPS=260, BW=65.2MiB/s (68.3MB/s)(662MiB/10153msec) 00:15:27.971 slat (usec): min=21, max=492596, avg=3657.22, stdev=16278.68 00:15:27.971 clat (msec): min=26, max=819, avg=241.52, stdev=190.43 00:15:27.971 lat (msec): min=27, max=877, avg=245.18, stdev=192.30 00:15:27.971 clat percentiles (msec): 00:15:27.971 | 1.00th=[ 82], 5.00th=[ 108], 10.00th=[ 131], 20.00th=[ 144], 00:15:27.971 | 30.00th=[ 150], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:15:27.971 | 70.00th=[ 176], 80.00th=[ 262], 90.00th=[ 617], 95.00th=[ 743], 00:15:27.971 | 99.00th=[ 802], 99.50th=[ 802], 99.90th=[ 818], 99.95th=[ 818], 00:15:27.971 | 99.99th=[ 818] 00:15:27.971 bw ( KiB/s): min= 8192, max=112640, per=8.86%, avg=66087.55, stdev=40241.71, samples=20 00:15:27.971 iops : min= 32, max= 440, avg=258.10, stdev=157.21, samples=20 00:15:27.971 lat (msec) : 50=0.04%, 100=3.36%, 250=75.85%, 500=7.41%, 750=9.79% 00:15:27.971 lat (msec) : 1000=3.55% 00:15:27.971 cpu : usr=0.17%, sys=1.19%, ctx=539, majf=0, minf=4097 00:15:27.971 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:15:27.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:15:27.971 issued rwts: total=2646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.971 job4: (groupid=0, jobs=1): err= 0: pid=84401: Sun Aug 11 20:56:36 2024 00:15:27.971 read: IOPS=127, BW=31.8MiB/s (33.4MB/s)(323MiB/10153msec) 00:15:27.971 slat (usec): min=20, max=216443, avg=7755.04, stdev=20444.59 00:15:27.971 clat (msec): min=63, max=757, avg=494.28, stdev=124.01 00:15:27.971 lat (msec): min=64, max=787, avg=502.03, stdev=125.69 00:15:27.971 clat percentiles (msec): 00:15:27.971 | 1.00th=[ 67], 5.00th=[ 284], 10.00th=[ 330], 20.00th=[ 418], 00:15:27.971 | 30.00th=[ 460], 40.00th=[ 485], 50.00th=[ 510], 60.00th=[ 531], 00:15:27.971 | 70.00th=[ 550], 80.00th=[ 575], 90.00th=[ 642], 95.00th=[ 684], 00:15:27.971 | 99.00th=[ 735], 99.50th=[ 743], 99.90th=[ 760], 99.95th=[ 760], 00:15:27.971 | 99.99th=[ 760] 00:15:27.971 bw ( KiB/s): min=21504, max=46685, per=4.21%, avg=31448.05, stdev=6390.32, samples=20 00:15:27.971 iops : min= 84, max= 182, avg=122.70, stdev=24.91, samples=20 00:15:27.971 lat (msec) : 100=2.17%, 250=0.93%, 500=43.19%, 750=53.56%, 1000=0.15% 00:15:27.971 cpu : usr=0.11%, sys=0.52%, ctx=254, majf=0, minf=4097 00:15:27.971 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:15:27.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.971 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:27.971 issued rwts: total=1292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.971 job5: (groupid=0, jobs=1): err= 0: pid=84402: Sun Aug 11 20:56:36 2024 00:15:27.971 read: IOPS=193, BW=48.3MiB/s (50.6MB/s)(489MiB/10122msec) 00:15:27.971 slat (usec): min=22, max=71514, avg=5123.48, stdev=12117.33 00:15:27.971 clat (msec): min=18, max=494, avg=325.70, stdev=93.63 00:15:27.971 lat (msec): min=19, max=494, avg=330.82, stdev=94.94 00:15:27.971 clat percentiles (msec): 00:15:27.971 | 1.00th=[ 56], 5.00th=[ 125], 10.00th=[ 174], 20.00th=[ 228], 00:15:27.971 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 355], 60.00th=[ 363], 00:15:27.971 | 70.00th=[ 376], 80.00th=[ 388], 90.00th=[ 422], 95.00th=[ 439], 00:15:27.971 | 99.00th=[ 460], 99.50th=[ 464], 99.90th=[ 489], 99.95th=[ 493], 00:15:27.971 | 99.99th=[ 493] 00:15:27.971 bw ( KiB/s): min=36864, max=93883, per=6.49%, avg=48405.75, stdev=15032.49, samples=20 00:15:27.971 iops : min= 144, max= 366, avg=188.95, stdev=58.64, samples=20 00:15:27.971 lat (msec) : 20=0.10%, 50=0.66%, 100=2.15%, 250=17.24%, 500=79.85% 00:15:27.971 cpu : usr=0.14%, sys=0.95%, ctx=429, majf=0, minf=4097 00:15:27.971 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:15:27.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.971 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:27.971 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.971 job6: (groupid=0, jobs=1): err= 0: pid=84403: Sun Aug 11 20:56:36 2024 00:15:27.972 read: IOPS=155, BW=38.9MiB/s (40.8MB/s)(394MiB/10119msec) 00:15:27.972 slat (usec): min=21, max=184701, avg=5932.37, stdev=16664.49 00:15:27.972 clat (msec): min=117, max=735, avg=405.00, stdev=144.19 00:15:27.972 lat (msec): min=117, max=766, avg=410.93, stdev=146.11 00:15:27.972 clat percentiles (msec): 00:15:27.972 | 1.00th=[ 136], 5.00th=[ 163], 
10.00th=[ 234], 20.00th=[ 313], 00:15:27.972 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 376], 00:15:27.972 | 70.00th=[ 477], 80.00th=[ 575], 90.00th=[ 625], 95.00th=[ 659], 00:15:27.972 | 99.00th=[ 701], 99.50th=[ 709], 99.90th=[ 726], 99.95th=[ 735], 00:15:27.972 | 99.99th=[ 735] 00:15:27.972 bw ( KiB/s): min=19928, max=57344, per=5.13%, avg=38290.11, stdev=11667.13, samples=19 00:15:27.972 iops : min= 77, max= 224, avg=149.53, stdev=45.65, samples=19 00:15:27.972 lat (msec) : 250=11.56%, 500=60.23%, 750=28.21% 00:15:27.972 cpu : usr=0.11%, sys=0.62%, ctx=334, majf=0, minf=4097 00:15:27.972 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:15:27.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.972 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:27.972 issued rwts: total=1574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.972 job7: (groupid=0, jobs=1): err= 0: pid=84404: Sun Aug 11 20:56:36 2024 00:15:27.972 read: IOPS=185, BW=46.3MiB/s (48.5MB/s)(468MiB/10124msec) 00:15:27.972 slat (usec): min=20, max=109203, avg=5265.12, stdev=12710.52 00:15:27.972 clat (msec): min=22, max=477, avg=340.05, stdev=73.53 00:15:27.972 lat (msec): min=23, max=477, avg=345.31, stdev=74.54 00:15:27.972 clat percentiles (msec): 00:15:27.972 | 1.00th=[ 50], 5.00th=[ 184], 10.00th=[ 228], 20.00th=[ 309], 00:15:27.972 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 368], 00:15:27.972 | 70.00th=[ 376], 80.00th=[ 393], 90.00th=[ 414], 95.00th=[ 430], 00:15:27.972 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 477], 99.95th=[ 477], 00:15:27.972 | 99.99th=[ 477] 00:15:27.972 bw ( KiB/s): min=35840, max=79872, per=6.21%, avg=46296.45, stdev=9393.18, samples=20 00:15:27.972 iops : min= 140, max= 312, avg=180.75, stdev=36.69, samples=20 00:15:27.972 lat (msec) : 50=1.07%, 100=0.27%, 250=11.59%, 500=87.08% 00:15:27.972 cpu : usr=0.10%, sys=0.82%, ctx=389, majf=0, minf=4097 00:15:27.972 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:15:27.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.972 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:27.972 issued rwts: total=1873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.972 job8: (groupid=0, jobs=1): err= 0: pid=84405: Sun Aug 11 20:56:36 2024 00:15:27.972 read: IOPS=309, BW=77.4MiB/s (81.2MB/s)(784MiB/10123msec) 00:15:27.972 slat (usec): min=21, max=63082, avg=3148.35, stdev=8089.09 00:15:27.972 clat (msec): min=19, max=432, avg=203.18, stdev=88.68 00:15:27.972 lat (msec): min=19, max=432, avg=206.33, stdev=90.00 00:15:27.972 clat percentiles (msec): 00:15:27.972 | 1.00th=[ 93], 5.00th=[ 107], 10.00th=[ 111], 20.00th=[ 118], 00:15:27.972 | 30.00th=[ 126], 40.00th=[ 176], 50.00th=[ 188], 60.00th=[ 197], 00:15:27.972 | 70.00th=[ 215], 80.00th=[ 309], 90.00th=[ 351], 95.00th=[ 372], 00:15:27.972 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 435], 99.95th=[ 435], 00:15:27.972 | 99.99th=[ 435] 00:15:27.972 bw ( KiB/s): min=42496, max=145920, per=10.54%, avg=78608.85, stdev=33775.07, samples=20 00:15:27.972 iops : min= 166, max= 570, avg=307.00, stdev=131.96, samples=20 00:15:27.972 lat (msec) : 20=0.03%, 50=0.16%, 100=2.11%, 250=72.57%, 500=25.14% 00:15:27.972 cpu : usr=0.25%, sys=1.35%, ctx=648, majf=0, minf=4097 00:15:27.972 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:15:27.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:27.972 issued rwts: total=3135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.972 job9: (groupid=0, jobs=1): err= 0: pid=84406: Sun Aug 11 20:56:36 2024 00:15:27.972 read: IOPS=944, BW=236MiB/s (248MB/s)(2366MiB/10016msec) 00:15:27.972 slat (usec): min=20, max=132702, avg=1053.08, stdev=3989.63 00:15:27.972 clat (msec): min=11, max=418, avg=66.59, stdev=60.36 00:15:27.972 lat (msec): min=16, max=448, avg=67.64, stdev=61.20 00:15:27.972 clat percentiles (msec): 00:15:27.972 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 36], 00:15:27.972 | 30.00th=[ 37], 40.00th=[ 39], 50.00th=[ 40], 60.00th=[ 42], 00:15:27.972 | 70.00th=[ 44], 80.00th=[ 125], 90.00th=[ 157], 95.00th=[ 169], 00:15:27.972 | 99.00th=[ 347], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 418], 00:15:27.972 | 99.99th=[ 418] 00:15:27.972 bw ( KiB/s): min=40529, max=466432, per=32.25%, avg=240597.80, stdev=168225.38, samples=20 00:15:27.972 iops : min= 158, max= 1822, avg=939.80, stdev=657.17, samples=20 00:15:27.972 lat (msec) : 20=0.03%, 50=77.71%, 100=1.22%, 250=19.16%, 500=1.88% 00:15:27.972 cpu : usr=0.42%, sys=3.23%, ctx=1951, majf=0, minf=4097 00:15:27.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:27.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:27.972 issued rwts: total=9462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.972 job10: (groupid=0, jobs=1): err= 0: pid=84407: Sun Aug 11 20:56:36 2024 00:15:27.972 read: IOPS=128, BW=32.1MiB/s (33.7MB/s)(326MiB/10154msec) 00:15:27.972 slat (usec): min=22, max=193931, avg=7677.98, stdev=21086.99 00:15:27.972 clat (msec): min=27, max=806, avg=489.47, stdev=151.89 00:15:27.972 lat (msec): min=28, max=868, avg=497.15, stdev=153.72 00:15:27.972 clat percentiles (msec): 00:15:27.972 | 1.00th=[ 114], 5.00th=[ 178], 10.00th=[ 257], 20.00th=[ 347], 00:15:27.972 | 30.00th=[ 435], 40.00th=[ 481], 50.00th=[ 514], 60.00th=[ 550], 00:15:27.972 | 70.00th=[ 584], 80.00th=[ 609], 90.00th=[ 651], 95.00th=[ 709], 00:15:27.972 | 99.00th=[ 785], 99.50th=[ 802], 99.90th=[ 810], 99.95th=[ 810], 00:15:27.972 | 99.99th=[ 810] 00:15:27.972 bw ( KiB/s): min=16384, max=56320, per=4.26%, avg=31776.60, stdev=9544.55, samples=20 00:15:27.972 iops : min= 64, max= 220, avg=124.00, stdev=37.38, samples=20 00:15:27.972 lat (msec) : 50=0.08%, 100=0.38%, 250=8.81%, 500=35.71%, 750=53.03% 00:15:27.972 lat (msec) : 1000=1.99% 00:15:27.972 cpu : usr=0.03%, sys=0.71%, ctx=255, majf=0, minf=4097 00:15:27.972 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.2% 00:15:27.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.972 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:27.972 issued rwts: total=1305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:27.972 00:15:27.972 Run status group 0 (all jobs): 00:15:27.972 READ: bw=729MiB/s (764MB/s), 31.8MiB/s-236MiB/s (33.4MB/s-248MB/s), io=7401MiB (7760MB), run=10016-10157msec 00:15:27.972 
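The read phase summarized above was driven by the fio-wrapper job file printed before the run (libaio, 256 KiB blocks, queue depth 64, 10 s time-based read per connected namespace). A rough single-device sketch of the same workload with a plain fio invocation, using only the parameters from that [global] section (the device name is whatever /dev/nvmeXn1 the connect step produced on the host):

  # Sketch: one job of the read phase, run directly with fio.
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=read --bs=262144 --iodepth=64 --ioengine=libaio --direct=1 \
      --thread=1 --invalidate=1 --norandommap=1 --numjobs=1 \
      --time_based=1 --runtime=10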
00:15:27.972 Disk stats (read/write): 00:15:27.972 nvme0n1: ios=6119/0, merge=0/0, ticks=1221032/0, in_queue=1221032, util=97.76% 00:15:27.972 nvme10n1: ios=2505/0, merge=0/0, ticks=1217631/0, in_queue=1217631, util=97.99% 00:15:27.972 nvme1n1: ios=3726/0, merge=0/0, ticks=1222587/0, in_queue=1222587, util=98.10% 00:15:27.972 nvme2n1: ios=5164/0, merge=0/0, ticks=1199855/0, in_queue=1199855, util=98.17% 00:15:27.972 nvme3n1: ios=2457/0, merge=0/0, ticks=1214210/0, in_queue=1214210, util=98.23% 00:15:27.972 nvme4n1: ios=3790/0, merge=0/0, ticks=1222477/0, in_queue=1222477, util=98.41% 00:15:27.972 nvme5n1: ios=3023/0, merge=0/0, ticks=1224127/0, in_queue=1224127, util=98.48% 00:15:27.972 nvme6n1: ios=3618/0, merge=0/0, ticks=1220714/0, in_queue=1220714, util=98.62% 00:15:27.972 nvme7n1: ios=6142/0, merge=0/0, ticks=1224940/0, in_queue=1224940, util=98.87% 00:15:27.972 nvme8n1: ios=18805/0, merge=0/0, ticks=1239761/0, in_queue=1239761, util=99.03% 00:15:27.972 nvme9n1: ios=2483/0, merge=0/0, ticks=1215927/0, in_queue=1215927, util=99.13% 00:15:27.972 20:56:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:15:27.972 [global] 00:15:27.972 thread=1 00:15:27.972 invalidate=1 00:15:27.972 rw=randwrite 00:15:27.972 time_based=1 00:15:27.972 runtime=10 00:15:27.972 ioengine=libaio 00:15:27.972 direct=1 00:15:27.972 bs=262144 00:15:27.972 iodepth=64 00:15:27.972 norandommap=1 00:15:27.972 numjobs=1 00:15:27.972 00:15:27.972 [job0] 00:15:27.972 filename=/dev/nvme0n1 00:15:27.972 [job1] 00:15:27.972 filename=/dev/nvme10n1 00:15:27.972 [job2] 00:15:27.972 filename=/dev/nvme1n1 00:15:27.972 [job3] 00:15:27.972 filename=/dev/nvme2n1 00:15:27.972 [job4] 00:15:27.972 filename=/dev/nvme3n1 00:15:27.972 [job5] 00:15:27.972 filename=/dev/nvme4n1 00:15:27.972 [job6] 00:15:27.972 filename=/dev/nvme5n1 00:15:27.972 [job7] 00:15:27.972 filename=/dev/nvme6n1 00:15:27.972 [job8] 00:15:27.972 filename=/dev/nvme7n1 00:15:27.972 [job9] 00:15:27.972 filename=/dev/nvme8n1 00:15:27.972 [job10] 00:15:27.972 filename=/dev/nvme9n1 00:15:27.972 Could not set queue depth (nvme0n1) 00:15:27.972 Could not set queue depth (nvme10n1) 00:15:27.972 Could not set queue depth (nvme1n1) 00:15:27.972 Could not set queue depth (nvme2n1) 00:15:27.972 Could not set queue depth (nvme3n1) 00:15:27.972 Could not set queue depth (nvme4n1) 00:15:27.972 Could not set queue depth (nvme5n1) 00:15:27.972 Could not set queue depth (nvme6n1) 00:15:27.972 Could not set queue depth (nvme7n1) 00:15:27.972 Could not set queue depth (nvme8n1) 00:15:27.972 Could not set queue depth (nvme9n1) 00:15:27.972 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, 
(T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:27.973 fio-3.35 00:15:27.973 Starting 11 threads 00:15:37.952 00:15:37.952 job0: (groupid=0, jobs=1): err= 0: pid=84603: Sun Aug 11 20:56:47 2024 00:15:37.952 write: IOPS=156, BW=39.2MiB/s (41.1MB/s)(403MiB/10284msec); 0 zone resets 00:15:37.952 slat (usec): min=18, max=83544, avg=6200.43, stdev=11463.59 00:15:37.952 clat (msec): min=34, max=674, avg=401.51, stdev=67.07 00:15:37.952 lat (msec): min=34, max=674, avg=407.71, stdev=67.26 00:15:37.952 clat percentiles (msec): 00:15:37.952 | 1.00th=[ 102], 5.00th=[ 266], 10.00th=[ 359], 20.00th=[ 388], 00:15:37.952 | 30.00th=[ 401], 40.00th=[ 409], 50.00th=[ 414], 60.00th=[ 422], 00:15:37.952 | 70.00th=[ 426], 80.00th=[ 435], 90.00th=[ 439], 95.00th=[ 460], 00:15:37.952 | 99.00th=[ 575], 99.50th=[ 625], 99.90th=[ 676], 99.95th=[ 676], 00:15:37.952 | 99.99th=[ 676] 00:15:37.952 bw ( KiB/s): min=34816, max=55808, per=5.00%, avg=39654.40, stdev=4203.97, samples=20 00:15:37.952 iops : min= 136, max= 218, avg=154.90, stdev=16.42, samples=20 00:15:37.952 lat (msec) : 50=0.25%, 100=0.62%, 250=2.73%, 500=94.79%, 750=1.61% 00:15:37.952 cpu : usr=0.53%, sys=0.47%, ctx=1477, majf=0, minf=1 00:15:37.952 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:15:37.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.952 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.952 issued rwts: total=0,1613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.952 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:37.952 job1: (groupid=0, jobs=1): err= 0: pid=84604: Sun Aug 11 20:56:47 2024 00:15:37.952 write: IOPS=155, BW=38.9MiB/s (40.8MB/s)(400MiB/10272msec); 0 zone resets 00:15:37.952 slat (usec): min=22, max=96925, avg=6243.99, stdev=11711.94 00:15:37.952 clat (msec): min=57, max=664, avg=404.38, stdev=60.66 00:15:37.952 lat (msec): min=57, max=664, avg=410.62, stdev=60.64 00:15:37.952 clat percentiles (msec): 00:15:37.952 | 1.00th=[ 117], 5.00th=[ 313], 10.00th=[ 368], 20.00th=[ 388], 00:15:37.952 | 30.00th=[ 397], 40.00th=[ 409], 50.00th=[ 414], 60.00th=[ 418], 00:15:37.952 | 70.00th=[ 426], 80.00th=[ 435], 90.00th=[ 443], 95.00th=[ 456], 00:15:37.952 | 99.00th=[ 558], 99.50th=[ 617], 99.90th=[ 667], 99.95th=[ 667], 00:15:37.952 | 99.99th=[ 667] 00:15:37.952 bw ( KiB/s): min=36864, max=45056, per=4.96%, avg=39347.20, stdev=2056.57, samples=20 00:15:37.952 iops : min= 144, max= 176, avg=153.70, stdev= 8.03, samples=20 00:15:37.952 lat (msec) : 100=0.75%, 250=3.12%, 500=94.50%, 750=1.62% 00:15:37.952 cpu : usr=0.40%, sys=0.64%, ctx=1663, majf=0, minf=1 00:15:37.952 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:15:37.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.952 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.952 issued rwts: total=0,1600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.952 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:15:37.952 job2: (groupid=0, jobs=1): err= 0: pid=84616: Sun Aug 11 20:56:47 2024 00:15:37.952 write: IOPS=267, BW=66.9MiB/s (70.2MB/s)(681MiB/10172msec); 0 zone resets 00:15:37.952 slat (usec): min=17, max=164026, avg=3667.11, stdev=6989.71 00:15:37.952 clat (msec): min=165, max=390, avg=235.21, stdev=18.70 00:15:37.952 lat (msec): min=166, max=390, avg=238.88, stdev=17.71 00:15:37.952 clat percentiles (msec): 00:15:37.952 | 1.00th=[ 207], 5.00th=[ 218], 10.00th=[ 222], 20.00th=[ 224], 00:15:37.952 | 30.00th=[ 228], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 236], 00:15:37.952 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 255], 00:15:37.952 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 376], 99.95th=[ 393], 00:15:37.952 | 99.99th=[ 393] 00:15:37.952 bw ( KiB/s): min=45056, max=72192, per=8.58%, avg=68121.60, stdev=5612.31, samples=20 00:15:37.952 iops : min= 176, max= 282, avg=266.10, stdev=21.92, samples=20 00:15:37.952 lat (msec) : 250=94.75%, 500=5.25% 00:15:37.952 cpu : usr=0.53%, sys=0.84%, ctx=3334, majf=0, minf=1 00:15:37.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:15:37.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.952 issued rwts: total=0,2724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.952 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:37.952 job3: (groupid=0, jobs=1): err= 0: pid=84617: Sun Aug 11 20:56:47 2024 00:15:37.952 write: IOPS=302, BW=75.7MiB/s (79.3MB/s)(770MiB/10175msec); 0 zone resets 00:15:37.952 slat (usec): min=15, max=81842, avg=3231.77, stdev=6054.77 00:15:37.952 clat (msec): min=9, max=390, avg=208.17, stdev=62.90 00:15:37.952 lat (msec): min=9, max=390, avg=211.40, stdev=63.60 00:15:37.952 clat percentiles (msec): 00:15:37.952 | 1.00th=[ 51], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 220], 00:15:37.952 | 30.00th=[ 224], 40.00th=[ 228], 50.00th=[ 234], 60.00th=[ 236], 00:15:37.952 | 70.00th=[ 239], 80.00th=[ 239], 90.00th=[ 241], 95.00th=[ 243], 00:15:37.952 | 99.00th=[ 305], 99.50th=[ 334], 99.90th=[ 376], 99.95th=[ 393], 00:15:37.952 | 99.99th=[ 393] 00:15:37.952 bw ( KiB/s): min=60928, max=232960, per=9.73%, avg=77184.00, stdev=36740.24, samples=20 00:15:37.952 iops : min= 238, max= 910, avg=301.50, stdev=143.52, samples=20 00:15:37.952 lat (msec) : 10=0.13%, 20=0.16%, 50=0.81%, 100=13.67%, 250=83.21% 00:15:37.952 lat (msec) : 500=2.01% 00:15:37.952 cpu : usr=0.48%, sys=0.90%, ctx=4021, majf=0, minf=1 00:15:37.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:15:37.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.952 issued rwts: total=0,3079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.952 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:37.952 job4: (groupid=0, jobs=1): err= 0: pid=84618: Sun Aug 11 20:56:47 2024 00:15:37.952 write: IOPS=268, BW=67.2MiB/s (70.5MB/s)(684MiB/10179msec); 0 zone resets 00:15:37.952 slat (usec): min=28, max=86108, avg=3649.98, stdev=6519.22 00:15:37.952 clat (msec): min=12, max=393, avg=234.25, stdev=29.92 00:15:37.952 lat (msec): min=12, max=393, avg=237.90, stdev=29.71 00:15:37.952 clat percentiles (msec): 00:15:37.952 | 1.00th=[ 92], 5.00th=[ 215], 10.00th=[ 222], 20.00th=[ 224], 00:15:37.952 | 30.00th=[ 228], 40.00th=[ 234], 50.00th=[ 236], 
60.00th=[ 236], 00:15:37.953 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 288], 00:15:37.953 | 99.00th=[ 326], 99.50th=[ 351], 99.90th=[ 380], 99.95th=[ 393], 00:15:37.953 | 99.99th=[ 393] 00:15:37.953 bw ( KiB/s): min=59392, max=73728, per=8.62%, avg=68434.75, stdev=3409.19, samples=20 00:15:37.953 iops : min= 232, max= 288, avg=267.30, stdev=13.38, samples=20 00:15:37.953 lat (msec) : 20=0.15%, 50=0.44%, 100=0.44%, 250=92.44%, 500=6.54% 00:15:37.953 cpu : usr=0.79%, sys=1.01%, ctx=3214, majf=0, minf=2 00:15:37.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:15:37.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.953 issued rwts: total=0,2737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.953 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:37.953 job5: (groupid=0, jobs=1): err= 0: pid=84619: Sun Aug 11 20:56:47 2024 00:15:37.953 write: IOPS=168, BW=42.1MiB/s (44.2MB/s)(433MiB/10278msec); 0 zone resets 00:15:37.953 slat (usec): min=15, max=52854, avg=5569.42, stdev=10292.39 00:15:37.953 clat (msec): min=58, max=651, avg=373.92, stdev=69.03 00:15:37.953 lat (msec): min=58, max=651, avg=379.49, stdev=69.78 00:15:37.953 clat percentiles (msec): 00:15:37.953 | 1.00th=[ 113], 5.00th=[ 220], 10.00th=[ 271], 20.00th=[ 363], 00:15:37.953 | 30.00th=[ 376], 40.00th=[ 384], 50.00th=[ 388], 60.00th=[ 397], 00:15:37.953 | 70.00th=[ 405], 80.00th=[ 409], 90.00th=[ 418], 95.00th=[ 439], 00:15:37.953 | 99.00th=[ 550], 99.50th=[ 600], 99.90th=[ 651], 99.95th=[ 651], 00:15:37.953 | 99.99th=[ 651] 00:15:37.953 bw ( KiB/s): min=36864, max=62589, per=5.38%, avg=42707.05, stdev=5777.96, samples=20 00:15:37.953 iops : min= 144, max= 244, avg=166.80, stdev=22.48, samples=20 00:15:37.953 lat (msec) : 100=0.92%, 250=7.16%, 500=90.65%, 750=1.27% 00:15:37.953 cpu : usr=0.37%, sys=0.58%, ctx=1971, majf=0, minf=1 00:15:37.953 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:15:37.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.953 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.953 issued rwts: total=0,1732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.953 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:37.953 job6: (groupid=0, jobs=1): err= 0: pid=84620: Sun Aug 11 20:56:47 2024 00:15:37.953 write: IOPS=153, BW=38.4MiB/s (40.3MB/s)(395MiB/10286msec); 0 zone resets 00:15:37.953 slat (usec): min=21, max=93986, avg=6174.82, stdev=11769.59 00:15:37.953 clat (msec): min=30, max=662, avg=409.98, stdev=67.37 00:15:37.953 lat (msec): min=30, max=662, avg=416.16, stdev=67.63 00:15:37.953 clat percentiles (msec): 00:15:37.953 | 1.00th=[ 97], 5.00th=[ 292], 10.00th=[ 359], 20.00th=[ 388], 00:15:37.953 | 30.00th=[ 409], 40.00th=[ 414], 50.00th=[ 422], 60.00th=[ 430], 00:15:37.953 | 70.00th=[ 439], 80.00th=[ 447], 90.00th=[ 456], 95.00th=[ 464], 00:15:37.953 | 99.00th=[ 558], 99.50th=[ 609], 99.90th=[ 659], 99.95th=[ 659], 00:15:37.953 | 99.99th=[ 659] 00:15:37.953 bw ( KiB/s): min=36864, max=52224, per=4.89%, avg=38835.20, stdev=3493.45, samples=20 00:15:37.953 iops : min= 144, max= 204, avg=151.70, stdev=13.65, samples=20 00:15:37.953 lat (msec) : 50=0.25%, 100=0.76%, 250=2.91%, 500=94.43%, 750=1.64% 00:15:37.953 cpu : usr=0.39%, sys=0.53%, ctx=798, majf=0, minf=1 00:15:37.953 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, 
>=64=96.0% 00:15:37.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.953 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.953 issued rwts: total=0,1581,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.953 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:37.953 job7: (groupid=0, jobs=1): err= 0: pid=84621: Sun Aug 11 20:56:47 2024 00:15:37.953 write: IOPS=266, BW=66.6MiB/s (69.8MB/s)(678MiB/10171msec); 0 zone resets 00:15:37.953 slat (usec): min=21, max=213302, avg=3609.04, stdev=7395.88 00:15:37.953 clat (msec): min=160, max=420, avg=236.48, stdev=21.83 00:15:37.953 lat (msec): min=175, max=420, avg=240.09, stdev=20.97 00:15:37.953 clat percentiles (msec): 00:15:37.953 | 1.00th=[ 209], 5.00th=[ 220], 10.00th=[ 222], 20.00th=[ 224], 00:15:37.953 | 30.00th=[ 230], 40.00th=[ 234], 50.00th=[ 236], 60.00th=[ 236], 00:15:37.953 | 70.00th=[ 239], 80.00th=[ 241], 90.00th=[ 243], 95.00th=[ 264], 00:15:37.953 | 99.00th=[ 359], 99.50th=[ 380], 99.90th=[ 422], 99.95th=[ 422], 00:15:37.953 | 99.99th=[ 422] 00:15:37.953 bw ( KiB/s): min=38476, max=73728, per=8.54%, avg=67741.40, stdev=7056.66, samples=20 00:15:37.953 iops : min= 150, max= 288, avg=264.60, stdev=27.63, samples=20 00:15:37.953 lat (msec) : 250=94.13%, 500=5.87% 00:15:37.953 cpu : usr=0.53%, sys=0.71%, ctx=4256, majf=0, minf=1 00:15:37.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:15:37.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.953 issued rwts: total=0,2710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.953 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:37.953 job8: (groupid=0, jobs=1): err= 0: pid=84622: Sun Aug 11 20:56:47 2024 00:15:37.953 write: IOPS=1097, BW=274MiB/s (288MB/s)(2758MiB/10054msec); 0 zone resets 00:15:37.953 slat (usec): min=17, max=7875, avg=901.19, stdev=1508.86 00:15:37.953 clat (msec): min=8, max=112, avg=57.41, stdev= 4.61 00:15:37.953 lat (msec): min=8, max=112, avg=58.31, stdev= 4.53 00:15:37.953 clat percentiles (msec): 00:15:37.953 | 1.00th=[ 53], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:15:37.953 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:15:37.953 | 70.00th=[ 59], 80.00th=[ 59], 90.00th=[ 60], 95.00th=[ 62], 00:15:37.953 | 99.00th=[ 81], 99.50th=[ 90], 99.90th=[ 102], 99.95th=[ 109], 00:15:37.953 | 99.99th=[ 109] 00:15:37.953 bw ( KiB/s): min=244713, max=291840, per=35.39%, avg=280830.85, stdev=10060.12, samples=20 00:15:37.953 iops : min= 955, max= 1140, avg=1096.95, stdev=39.47, samples=20 00:15:37.953 lat (msec) : 10=0.05%, 20=0.04%, 50=0.22%, 100=99.57%, 250=0.13% 00:15:37.953 cpu : usr=1.72%, sys=2.82%, ctx=13279, majf=0, minf=1 00:15:37.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:37.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.953 issued rwts: total=0,11032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.953 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:37.953 job9: (groupid=0, jobs=1): err= 0: pid=84623: Sun Aug 11 20:56:47 2024 00:15:37.953 write: IOPS=148, BW=37.0MiB/s (38.8MB/s)(381MiB/10290msec); 0 zone resets 00:15:37.953 slat (usec): min=20, max=224163, avg=6571.09, stdev=14007.60 00:15:37.953 clat (msec): min=34, max=753, avg=425.29, 
stdev=93.73 00:15:37.953 lat (msec): min=34, max=753, avg=431.86, stdev=94.29 00:15:37.953 clat percentiles (msec): 00:15:37.953 | 1.00th=[ 101], 5.00th=[ 243], 10.00th=[ 368], 20.00th=[ 393], 00:15:37.953 | 30.00th=[ 405], 40.00th=[ 414], 50.00th=[ 418], 60.00th=[ 426], 00:15:37.953 | 70.00th=[ 439], 80.00th=[ 451], 90.00th=[ 542], 95.00th=[ 600], 00:15:37.953 | 99.00th=[ 667], 99.50th=[ 709], 99.90th=[ 751], 99.95th=[ 751], 00:15:37.953 | 99.99th=[ 751] 00:15:37.953 bw ( KiB/s): min=22528, max=57344, per=4.71%, avg=37376.00, stdev=6640.44, samples=20 00:15:37.953 iops : min= 88, max= 224, avg=146.00, stdev=25.94, samples=20 00:15:37.953 lat (msec) : 50=0.26%, 100=0.52%, 250=5.25%, 500=76.77%, 750=16.99% 00:15:37.953 lat (msec) : 1000=0.20% 00:15:37.953 cpu : usr=0.29%, sys=0.52%, ctx=1884, majf=0, minf=1 00:15:37.953 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:15:37.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.953 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.953 issued rwts: total=0,1524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.953 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:37.953 job10: (groupid=0, jobs=1): err= 0: pid=84624: Sun Aug 11 20:56:47 2024 00:15:37.953 write: IOPS=152, BW=38.1MiB/s (39.9MB/s)(391MiB/10275msec); 0 zone resets 00:15:37.953 slat (usec): min=23, max=127431, avg=6402.51, stdev=12220.81 00:15:37.953 clat (msec): min=130, max=651, avg=413.82, stdev=52.57 00:15:37.953 lat (msec): min=130, max=651, avg=420.23, stdev=52.11 00:15:37.953 clat percentiles (msec): 00:15:37.953 | 1.00th=[ 174], 5.00th=[ 338], 10.00th=[ 376], 20.00th=[ 397], 00:15:37.953 | 30.00th=[ 405], 40.00th=[ 414], 50.00th=[ 422], 60.00th=[ 430], 00:15:37.953 | 70.00th=[ 435], 80.00th=[ 443], 90.00th=[ 451], 95.00th=[ 460], 00:15:37.953 | 99.00th=[ 542], 99.50th=[ 600], 99.90th=[ 651], 99.95th=[ 651], 00:15:37.953 | 99.99th=[ 651] 00:15:37.953 bw ( KiB/s): min=36864, max=40960, per=4.84%, avg=38403.65, stdev=1463.15, samples=20 00:15:37.953 iops : min= 144, max= 160, avg=150.00, stdev= 5.73, samples=20 00:15:37.953 lat (msec) : 250=2.62%, 500=95.97%, 750=1.41% 00:15:37.953 cpu : usr=0.36%, sys=0.50%, ctx=890, majf=0, minf=1 00:15:37.953 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:15:37.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.953 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:37.953 issued rwts: total=0,1564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.953 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:37.953 00:15:37.953 Run status group 0 (all jobs): 00:15:37.953 WRITE: bw=775MiB/s (813MB/s), 37.0MiB/s-274MiB/s (38.8MB/s-288MB/s), io=7974MiB (8361MB), run=10054-10290msec 00:15:37.953 00:15:37.953 Disk stats (read/write): 00:15:37.953 nvme0n1: ios=50/3201, merge=0/0, ticks=49/1235490, in_queue=1235539, util=97.94% 00:15:37.953 nvme10n1: ios=49/3174, merge=0/0, ticks=42/1233939, in_queue=1233981, util=98.02% 00:15:37.953 nvme1n1: ios=36/5310, merge=0/0, ticks=21/1208581, in_queue=1208602, util=97.89% 00:15:37.953 nvme2n1: ios=20/6020, merge=0/0, ticks=29/1208477, in_queue=1208506, util=97.96% 00:15:37.953 nvme3n1: ios=25/5340, merge=0/0, ticks=45/1208556, in_queue=1208601, util=98.18% 00:15:37.953 nvme4n1: ios=0/3432, merge=0/0, ticks=0/1236282, in_queue=1236282, util=98.24% 00:15:37.953 nvme5n1: ios=0/3130, merge=0/0, ticks=0/1234398, 
in_queue=1234398, util=98.44% 00:15:37.953 nvme6n1: ios=0/5280, merge=0/0, ticks=0/1208829, in_queue=1208829, util=98.36% 00:15:37.953 nvme7n1: ios=0/21894, merge=0/0, ticks=0/1215203, in_queue=1215203, util=98.66% 00:15:37.953 nvme8n1: ios=0/3019, merge=0/0, ticks=0/1231638, in_queue=1231638, util=98.91% 00:15:37.953 nvme9n1: ios=0/3093, merge=0/0, ticks=0/1232563, in_queue=1232563, util=98.87% 00:15:37.953 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:15:37.953 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:15:37.953 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.953 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.954 20:56:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:15:37.954 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:37.954 
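The trace in this stretch is the per-subsystem teardown: cnode1 and cnode2 have been handled so far, and the same sequence repeats below for cnode3 through cnode11. Each iteration disconnects the host-side controller, checks lsblk until the SPDK<N> serial is gone, and then deletes the subsystem over RPC (rpc_cmd is the test helper that forwards to scripts/rpc.py). Condensed into plain shell, with the polling loop and sleep interval as assumptions layered on what the trace shows:

# hedged condensation of the teardown loop traced in this section
for i in $(seq 1 11); do
  nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"             # drop the host-side controller
  while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do      # wait for the namespace to vanish
    sleep 1                                                    # interval is an assumption
  done
  rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"  # remove the subsystem on the target
done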
20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:15:37.954 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:15:37.954 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:15:37.954 
20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:15:37.954 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:15:37.954 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:15:37.954 
20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:15:37.954 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:15:37.954 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:37.954 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:15:37.955 
20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:15:37.955 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:15:37.955 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 
00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:37.955 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:15:38.214 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # nvmfcleanup 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.214 rmmod nvme_tcp 00:15:38.214 rmmod nvme_fabrics 
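With all eleven subsystems removed, nvmftestfini unloads the host NVMe/TCP modules (the rmmod lines above are the kernel confirming the removals) and, in the trace that follows, stops the SPDK target process and dismantles the virtual test network. Condensed below, with this run's target pid (83944) read from the log and the final namespace removal assumed to be what remove_spdk_ns does:

# hedged condensation of nvmftestfini for this run
sync
modprobe -v -r nvme-tcp                               # also drops the now-unused nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
kill 83944                                            # stop the SPDK nvmf target (reactor_0)
iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip the test's firewall rules
ip link delete nvmf_br type bridge                    # tear down the bridge and veth endpoints
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                      # assumed: remove_spdk_ns deletes the namespace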
00:15:38.214 rmmod nvme_keyring 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # '[' -n 83944 ']' 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # killprocess 83944 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 83944 ']' 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 83944 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83944 00:15:38.214 killing process with pid 83944 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83944' 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 83944 00:15:38.214 20:56:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 83944 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # iptr 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@783 -- # iptables-save 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@783 -- # iptables-restore 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:15:38.782 20:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # remove_spdk_ns 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.782 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # return 0 00:15:39.041 ************************************ 00:15:39.041 END TEST nvmf_multiconnection 00:15:39.041 ************************************ 00:15:39.041 00:15:39.041 real 0m49.060s 00:15:39.041 user 2m49.601s 00:15:39.041 sys 0m24.676s 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.041 ************************************ 00:15:39.041 START TEST nvmf_initiator_timeout 00:15:39.041 ************************************ 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:15:39.041 * Looking for test storage... 
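The multiconnection test ends here and nvmf_initiator_timeout begins on the same TCP transport. Its nvmftestinit, traced below, rebuilds the virtual topology that was just torn down: a fresh nvmf_tgt_ns_spdk namespace, veth pairs carrying the initiator addresses (10.0.0.1, 10.0.0.2) and the target addresses (10.0.0.3, 10.0.0.4), plus the nvmf_br bridge joining the peer ends. The 'Cannot find device' messages below are expected, since the helper first tries to remove leftovers from a previous run. Reduced to one initiator/target pair (the second pair and the bridge wiring are omitted; the last line is an assumption, as this excerpt is cut off before the target-side links come up):

# hedged sketch of the veth/namespace setup traced below
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + bridge end
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end + bridge end
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up      # assumed; not visible in this excerpt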
00:15:39.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:15:39.041 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.042 20:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # prepare_net_devs 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # local -g is_hw=no 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # remove_spdk_ns 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@452 -- # nvmf_veth_init 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:15:39.042 Cannot find device "nvmf_init_br" 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:15:39.042 Cannot find device "nvmf_init_br2" 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:15:39.042 Cannot find device "nvmf_tgt_br" 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # true 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.042 Cannot find device "nvmf_tgt_br2" 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # true 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:15:39.042 Cannot find device "nvmf_init_br" 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:15:39.042 Cannot find device "nvmf_init_br2" 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:15:39.042 Cannot find device "nvmf_tgt_br" 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:15:39.042 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:15:39.301 Cannot find device "nvmf_tgt_br2" 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:15:39.301 Cannot find device "nvmf_br" 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # 
true 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:15:39.301 Cannot find device "nvmf_init_if" 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:15:39.301 Cannot find device "nvmf_init_if2" 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:39.301 20:56:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:15:39.301 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:15:39.301 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:15:39.301 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:15:39.301 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link 
set nvmf_tgt_br up 00:15:39.301 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:15:39.301 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.301 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.301 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.301 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:15:39.302 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:15:39.302 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.302 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:39.560 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.560 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.560 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.560 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:39.560 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:39.560 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:39.560 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:15:39.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:39.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:15:39.561 00:15:39.561 --- 10.0.0.3 ping statistics --- 00:15:39.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.561 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:15:39.561 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:39.561 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:15:39.561 00:15:39.561 --- 10.0.0.4 ping statistics --- 00:15:39.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.561 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:39.561 00:15:39.561 --- 10.0.0.1 ping statistics --- 00:15:39.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.561 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:39.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:15:39.561 00:15:39.561 --- 10.0.0.2 ping statistics --- 00:15:39.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.561 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@453 -- # return 0 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@501 -- # nvmfpid=85037 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # waitforlisten 85037 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 85037 ']' 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.561 20:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:39.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:39.561 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:39.561 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:15:39.561 [2024-08-11 20:56:50.229547] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:15:39.561 [2024-08-11 20:56:50.229655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.820 [2024-08-11 20:56:50.367942] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.820 [2024-08-11 20:56:50.424841] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.820 [2024-08-11 20:56:50.424910] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.820 [2024-08-11 20:56:50.424920] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.820 [2024-08-11 20:56:50.424927] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.820 [2024-08-11 20:56:50.424934] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
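The nvmf_veth_init sequence traced above builds a small two-namespace topology: the target's interfaces are moved into the nvmf_tgt_ns_spdk namespace, the host-side veth peers are enslaved to a bridge, and iptables rules admit NVMe/TCP traffic on port 4420. A condensed sketch of the first initiator/target pair, using only commands that appear in the trace (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is set up the same way; the earlier "Cannot find device" messages are just the pre-cleanup finding nothing left over from a previous run):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br                 # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                   # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                            # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target listener address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                                   # bridge joins the host-side peers
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP on 4420
  ping -c 1 10.0.0.3                                                        # sanity check before starting nvmf_tgt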
00:15:39.820 [2024-08-11 20:56:50.425120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.820 [2024-08-11 20:56:50.425260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.820 [2024-08-11 20:56:50.425391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.820 [2024-08-11 20:56:50.425393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.820 [2024-08-11 20:56:50.476290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:39.820 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:39.820 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:15:39.820 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:15:39.820 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:39.820 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:39.820 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.820 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:39.820 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:39.820 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:39.820 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:40.078 Malloc0 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:40.079 Delay0 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:40.079 [2024-08-11 20:56:50.624198] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:40.079 20:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:40.079 [2024-08-11 20:56:50.652376] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:40.079 20:56:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:15:42.612 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:42.612 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:42.612 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:42.612 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:42.612 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:42.612 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:15:42.612 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=85088 00:15:42.612 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
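rpc_cmd in these scripts is the harness's wrapper around SPDK's JSON-RPC interface (scripts/rpc.py against the /var/tmp/spdk.sock socket the target just opened); as a rough standalone sketch, the provisioning that initiator_timeout.sh performed above, with the same arguments captured in the trace, is:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512-byte blocks
  scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # avg/p99 read and write latencies, microseconds
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # export the delay bdev, not Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 \
      --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9

Exporting Delay0 rather than the Malloc bdev itself is what lets the test dial I/O latency up and down at runtime while the initiator stays connected.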
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:15:42.612 20:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:15:42.612 [global] 00:15:42.612 thread=1 00:15:42.612 invalidate=1 00:15:42.612 rw=write 00:15:42.612 time_based=1 00:15:42.612 runtime=60 00:15:42.612 ioengine=libaio 00:15:42.612 direct=1 00:15:42.612 bs=4096 00:15:42.612 iodepth=1 00:15:42.612 norandommap=0 00:15:42.612 numjobs=1 00:15:42.612 00:15:42.612 verify_dump=1 00:15:42.612 verify_backlog=512 00:15:42.612 verify_state_save=0 00:15:42.612 do_verify=1 00:15:42.612 verify=crc32c-intel 00:15:42.612 [job0] 00:15:42.612 filename=/dev/nvme0n1 00:15:42.612 Could not set queue depth (nvme0n1) 00:15:42.612 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:42.612 fio-3.35 00:15:42.613 Starting 1 thread 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:45.145 true 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:45.145 true 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:45.145 true 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:45.145 true 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:45.145 20:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@557 -- # xtrace_disable 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:48.433 true 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:48.433 true 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:48.433 true 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:48.433 true 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:15:48.433 20:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 85088 00:16:44.670 00:16:44.670 job0: (groupid=0, jobs=1): err= 0: pid=85119: Sun Aug 11 20:57:53 2024 00:16:44.670 read: IOPS=799, BW=3196KiB/s (3273kB/s)(187MiB/60001msec) 00:16:44.670 slat (usec): min=9, max=11421, avg=12.61, stdev=62.93 00:16:44.670 clat (usec): min=72, max=40544k, avg=1059.61, stdev=185155.04 00:16:44.670 lat (usec): min=166, max=40544k, avg=1072.23, stdev=185155.06 00:16:44.670 clat percentiles (usec): 00:16:44.670 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:16:44.670 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:16:44.670 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:16:44.670 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 412], 99.95th=[ 486], 00:16:44.670 | 99.99th=[ 750] 00:16:44.670 write: IOPS=802, BW=3208KiB/s (3285kB/s)(188MiB/60001msec); 0 zone resets 00:16:44.670 slat (usec): min=11, max=641, avg=17.66, stdev= 6.00 00:16:44.670 clat (usec): min=115, max=1510, avg=158.12, stdev=25.22 00:16:44.670 lat (usec): min=130, max=1527, avg=175.77, stdev=26.39 00:16:44.670 clat percentiles (usec): 00:16:44.670 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 133], 20.00th=[ 139], 00:16:44.670 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 161], 00:16:44.670 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 202], 00:16:44.670 | 99.00th=[ 233], 
99.50th=[ 243], 99.90th=[ 285], 99.95th=[ 338], 00:16:44.670 | 99.99th=[ 553] 00:16:44.670 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=9705.64, stdev=1748.80, samples=39 00:16:44.670 iops : min= 1024, max= 3072, avg=2426.41, stdev=437.20, samples=39 00:16:44.670 lat (usec) : 100=0.01%, 250=94.56%, 500=5.41%, 750=0.02%, 1000=0.01% 00:16:44.670 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:16:44.670 cpu : usr=0.49%, sys=1.88%, ctx=96087, majf=0, minf=5 00:16:44.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.670 issued rwts: total=47948,48128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.670 00:16:44.670 Run status group 0 (all jobs): 00:16:44.670 READ: bw=3196KiB/s (3273kB/s), 3196KiB/s-3196KiB/s (3273kB/s-3273kB/s), io=187MiB (196MB), run=60001-60001msec 00:16:44.670 WRITE: bw=3208KiB/s (3285kB/s), 3208KiB/s-3208KiB/s (3285kB/s-3285kB/s), io=188MiB (197MB), run=60001-60001msec 00:16:44.670 00:16:44.670 Disk stats (read/write): 00:16:44.670 nvme0n1: ios=47851/48108, merge=0/0, ticks=10481/7945, in_queue=18426, util=99.70% 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:44.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:16:44.670 nvmf hotplug test: fio successful as expected 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@557 -- # xtrace_disable 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:16:44.670 20:57:53 
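With fio writing to /dev/nvme0n1 through Delay0, the test raised the delay latencies to 31,000,000 µs (p99 write to 310,000,000 µs), held them for a few seconds, then dropped everything back to 30 µs, per the bdev_delay_update_latency calls traced above. The 31 s figure presumably exceeds the Linux initiator's default I/O timeout (nvme_core.io_timeout, 30 s by default), so in-flight writes exercise the timeout path; fio then finishing its 60 s run with err=0 and a clean verify is what the "nvmf hotplug test: fio successful as expected" message reports. A condensed sketch of the toggle, using the same values as the trace:

  scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000        # ~31 s, past the default 30 s initiator timeout
  scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
  scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3                                                                   # let some I/O sit in the slow path
  for lat in avg_read avg_write p99_read p99_write; do
      scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30             # restore fast latencies so fio can finish
  done
  cat /sys/module/nvme_core/parameters/io_timeout                           # initiator-side timeout the test is presumed to target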
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # nvmfcleanup 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:44.670 rmmod nvme_tcp 00:16:44.670 rmmod nvme_fabrics 00:16:44.670 rmmod nvme_keyring 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # '[' -n 85037 ']' 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # killprocess 85037 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 85037 ']' 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 85037 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85037 00:16:44.670 killing process with pid 85037 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85037' 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 85037 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 85037 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # iptr 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@783 -- # iptables-save 00:16:44.670 20:57:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@783 -- # iptables-restore 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:16:44.670 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # remove_spdk_ns 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # return 0 00:16:44.671 00:16:44.671 real 1m4.280s 00:16:44.671 user 3m57.068s 00:16:44.671 sys 0m15.470s 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:44.671 ************************************ 00:16:44.671 END TEST nvmf_initiator_timeout 00:16:44.671 ************************************ 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:16:44.671 ************************************ 00:16:44.671 END TEST nvmf_target_extra 00:16:44.671 ************************************ 00:16:44.671 00:16:44.671 real 6m18.706s 00:16:44.671 user 15m52.632s 00:16:44.671 sys 1m43.552s 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:44.671 20:57:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:44.671 20:57:53 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:44.671 20:57:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:44.671 20:57:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:44.671 20:57:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:44.671 ************************************ 00:16:44.671 START TEST nvmf_host 00:16:44.671 ************************************ 00:16:44.671 20:57:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:44.671 * Looking for test storage... 00:16:44.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:44.671 
20:57:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.671 ************************************ 00:16:44.671 START TEST nvmf_identify 00:16:44.671 ************************************ 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:44.671 * Looking for test storage... 00:16:44.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.671 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.672 20:57:54 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # prepare_net_devs 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # local -g is_hw=no 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # remove_spdk_ns 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@452 -- # nvmf_veth_init 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:16:44.672 Cannot find device "nvmf_init_br" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:16:44.672 Cannot find device "nvmf_init_br2" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:16:44.672 Cannot find device "nvmf_tgt_br" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.672 Cannot find device "nvmf_tgt_br2" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:16:44.672 Cannot find device "nvmf_init_br" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:16:44.672 Cannot find device "nvmf_init_br2" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:16:44.672 Cannot find device "nvmf_tgt_br" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:16:44.672 Cannot find device "nvmf_tgt_br2" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:16:44.672 Cannot find device "nvmf_br" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:16:44.672 Cannot find device "nvmf_init_if" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:16:44.672 Cannot find device "nvmf_init_if2" 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:16:44.672 20:57:54 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.672 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:16:44.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:16:44.673 00:16:44.673 --- 10.0.0.3 ping statistics --- 00:16:44.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.673 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:16:44.673 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:44.673 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:16:44.673 00:16:44.673 --- 10.0.0.4 ping statistics --- 00:16:44.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.673 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:44.673 00:16:44.673 --- 10.0.0.1 ping statistics --- 00:16:44.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.673 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:44.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:44.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:44.673 00:16:44.673 --- 10.0.0.2 ping statistics --- 00:16:44.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.673 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@453 -- # return 0 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86027 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86027 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 86027 ']' 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:44.673 20:57:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:44.673 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:44.673 [2024-08-11 20:57:54.768705] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
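The nvmf_veth_init trace above builds the test network before the target is launched: a network namespace (nvmf_tgt_ns_spdk) holding the target-side interfaces, four veth pairs, a bridge (nvmf_br) joining the host-side peers, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. A minimal standalone sketch of that topology, condensed from the commands visible in the trace (interface names and addresses are taken from the log; this is an illustration of the sequence, not the shared common.sh helper itself):

# Namespace for the target side of the veth pairs
ip netns add nvmf_tgt_ns_spdk

# Initiator-side and target-side veth pairs (the *_br ends join the bridge)
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Move the target interfaces into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring links up on both sides of the namespace boundary
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip link set nvmf_init_br up
ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br

# Allow NVMe/TCP traffic (port 4420) in and across the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks mirroring the trace
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2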
00:16:44.673 [2024-08-11 20:57:54.768938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.673 [2024-08-11 20:57:54.912574] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.673 [2024-08-11 20:57:55.024908] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.673 [2024-08-11 20:57:55.025233] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.673 [2024-08-11 20:57:55.025402] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.673 [2024-08-11 20:57:55.025681] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.673 [2024-08-11 20:57:55.025697] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.673 [2024-08-11 20:57:55.025821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.673 [2024-08-11 20:57:55.025959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.673 [2024-08-11 20:57:55.026050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.673 [2024-08-11 20:57:55.026416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.673 [2024-08-11 20:57:55.110866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@557 -- # xtrace_disable 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.241 [2024-08-11 20:57:55.843694] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@557 -- # xtrace_disable 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.241 Malloc0 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@557 -- # xtrace_disable 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.241 
20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@557 -- # xtrace_disable 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@557 -- # xtrace_disable 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.241 [2024-08-11 20:57:55.962572] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@557 -- # xtrace_disable 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@557 -- # xtrace_disable 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.241 [ 00:16:45.241 { 00:16:45.241 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:45.241 "subtype": "Discovery", 00:16:45.241 "listen_addresses": [ 00:16:45.241 { 00:16:45.241 "trtype": "TCP", 00:16:45.241 "adrfam": "IPv4", 00:16:45.241 "traddr": "10.0.0.3", 00:16:45.241 "trsvcid": "4420" 00:16:45.241 } 00:16:45.241 ], 00:16:45.241 "allow_any_host": true, 00:16:45.241 "hosts": [] 00:16:45.241 }, 00:16:45.241 { 00:16:45.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.241 "subtype": "NVMe", 00:16:45.241 "listen_addresses": [ 00:16:45.241 { 00:16:45.241 "trtype": "TCP", 00:16:45.241 "adrfam": "IPv4", 00:16:45.241 "traddr": "10.0.0.3", 00:16:45.241 "trsvcid": "4420" 00:16:45.241 } 00:16:45.241 ], 00:16:45.241 "allow_any_host": true, 00:16:45.241 "hosts": [], 00:16:45.241 "serial_number": "SPDK00000000000001", 00:16:45.241 "model_number": "SPDK bdev Controller", 00:16:45.241 "max_namespaces": 32, 00:16:45.241 "min_cntlid": 1, 00:16:45.241 "max_cntlid": 65519, 00:16:45.241 "namespaces": [ 00:16:45.241 { 00:16:45.241 "nsid": 1, 00:16:45.241 "bdev_name": "Malloc0", 00:16:45.241 "name": "Malloc0", 00:16:45.241 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:45.241 "eui64": "ABCDEF0123456789", 00:16:45.241 "uuid": "58f5dd42-e7ce-4bb4-aabe-f9fdaad98086" 00:16:45.241 } 00:16:45.241 ] 00:16:45.241 } 00:16:45.241 ] 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:16:45.241 20:57:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 
-- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:45.242 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:45.242 [2024-08-11 20:57:56.018570] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:16:45.242 [2024-08-11 20:57:56.018638] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86062 ] 00:16:45.503 [2024-08-11 20:57:56.151546] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:45.503 [2024-08-11 20:57:56.151648] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:45.503 [2024-08-11 20:57:56.151658] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:45.503 [2024-08-11 20:57:56.151669] nvme_tcp.c:2363:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:45.503 [2024-08-11 20:57:56.151679] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:45.503 [2024-08-11 20:57:56.151848] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:45.503 [2024-08-11 20:57:56.151903] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd49930 0 00:16:45.503 [2024-08-11 20:57:56.166645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:45.503 [2024-08-11 20:57:56.166687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:45.503 [2024-08-11 20:57:56.166701] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:45.503 [2024-08-11 20:57:56.166706] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:45.503 [2024-08-11 20:57:56.166754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.503 [2024-08-11 20:57:56.166763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.503 [2024-08-11 20:57:56.166767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49930) 00:16:45.503 [2024-08-11 20:57:56.166780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:45.503 [2024-08-11 20:57:56.166815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82600, cid 0, qid 0 00:16:45.503 [2024-08-11 20:57:56.174655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.503 [2024-08-11 20:57:56.174696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.503 [2024-08-11 20:57:56.174703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.503 [2024-08-11 20:57:56.174708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82600) on tqpair=0xd49930 00:16:45.503 [2024-08-11 20:57:56.174722] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:45.503 [2024-08-11 20:57:56.174731] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:45.503 [2024-08-11 20:57:56.174737] 
nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:45.504 [2024-08-11 20:57:56.174764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.174771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.174775] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49930) 00:16:45.504 [2024-08-11 20:57:56.174785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.504 [2024-08-11 20:57:56.174816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82600, cid 0, qid 0 00:16:45.504 [2024-08-11 20:57:56.174881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.504 [2024-08-11 20:57:56.174889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.504 [2024-08-11 20:57:56.174893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.174897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82600) on tqpair=0xd49930 00:16:45.504 [2024-08-11 20:57:56.174904] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:45.504 [2024-08-11 20:57:56.174911] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:45.504 [2024-08-11 20:57:56.174936] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.174956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.174961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49930) 00:16:45.504 [2024-08-11 20:57:56.174970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.504 [2024-08-11 20:57:56.174994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82600, cid 0, qid 0 00:16:45.504 [2024-08-11 20:57:56.175035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.504 [2024-08-11 20:57:56.175043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.504 [2024-08-11 20:57:56.175047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82600) on tqpair=0xd49930 00:16:45.504 [2024-08-11 20:57:56.175058] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:45.504 [2024-08-11 20:57:56.175068] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:45.504 [2024-08-11 20:57:56.175076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175085] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49930) 00:16:45.504 [2024-08-11 20:57:56.175093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.504 [2024-08-11 20:57:56.175115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82600, cid 0, qid 0 00:16:45.504 [2024-08-11 20:57:56.175160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.504 [2024-08-11 20:57:56.175167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.504 [2024-08-11 20:57:56.175172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175176] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82600) on tqpair=0xd49930 00:16:45.504 [2024-08-11 20:57:56.175182] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:45.504 [2024-08-11 20:57:56.175194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49930) 00:16:45.504 [2024-08-11 20:57:56.175211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.504 [2024-08-11 20:57:56.175234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82600, cid 0, qid 0 00:16:45.504 [2024-08-11 20:57:56.175275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.504 [2024-08-11 20:57:56.175283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.504 [2024-08-11 20:57:56.175287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82600) on tqpair=0xd49930 00:16:45.504 [2024-08-11 20:57:56.175296] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:45.504 [2024-08-11 20:57:56.175302] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:45.504 [2024-08-11 20:57:56.175310] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:45.504 [2024-08-11 20:57:56.175416] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:45.504 [2024-08-11 20:57:56.175422] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:45.504 [2024-08-11 20:57:56.175433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49930) 00:16:45.504 [2024-08-11 20:57:56.175450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.504 [2024-08-11 20:57:56.175473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82600, cid 0, qid 0 00:16:45.504 [2024-08-11 
20:57:56.175520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.504 [2024-08-11 20:57:56.175528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.504 [2024-08-11 20:57:56.175532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82600) on tqpair=0xd49930 00:16:45.504 [2024-08-11 20:57:56.175543] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:45.504 [2024-08-11 20:57:56.175554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49930) 00:16:45.504 [2024-08-11 20:57:56.175571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.504 [2024-08-11 20:57:56.175593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82600, cid 0, qid 0 00:16:45.504 [2024-08-11 20:57:56.175658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.504 [2024-08-11 20:57:56.175668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.504 [2024-08-11 20:57:56.175672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82600) on tqpair=0xd49930 00:16:45.504 [2024-08-11 20:57:56.175682] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:45.504 [2024-08-11 20:57:56.175688] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:45.504 [2024-08-11 20:57:56.175696] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:45.504 [2024-08-11 20:57:56.175708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:45.504 [2024-08-11 20:57:56.175719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175724] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49930) 00:16:45.504 [2024-08-11 20:57:56.175732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.504 [2024-08-11 20:57:56.175758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82600, cid 0, qid 0 00:16:45.504 [2024-08-11 20:57:56.175846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.504 [2024-08-11 20:57:56.175854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.504 [2024-08-11 20:57:56.175859] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175863] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd49930): datao=0, datal=4096, 
cccid=0 00:16:45.504 [2024-08-11 20:57:56.175868] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd82600) on tqpair(0xd49930): expected_datao=0, payload_size=4096 00:16:45.504 [2024-08-11 20:57:56.175873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175881] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175886] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.504 [2024-08-11 20:57:56.175903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.504 [2024-08-11 20:57:56.175908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82600) on tqpair=0xd49930 00:16:45.504 [2024-08-11 20:57:56.175921] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:45.504 [2024-08-11 20:57:56.175926] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:45.504 [2024-08-11 20:57:56.175937] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:45.504 [2024-08-11 20:57:56.175944] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:45.504 [2024-08-11 20:57:56.175949] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:45.504 [2024-08-11 20:57:56.175954] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:45.504 [2024-08-11 20:57:56.175964] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:45.504 [2024-08-11 20:57:56.175973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175978] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.504 [2024-08-11 20:57:56.175982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd49930) 00:16:45.504 [2024-08-11 20:57:56.175990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.504 [2024-08-11 20:57:56.176015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82600, cid 0, qid 0 00:16:45.504 [2024-08-11 20:57:56.176064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.504 [2024-08-11 20:57:56.176072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.504 [2024-08-11 20:57:56.176076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82600) on tqpair=0xd49930 00:16:45.505 [2024-08-11 20:57:56.176089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176094] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0xd49930) 00:16:45.505 [2024-08-11 20:57:56.176105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.505 [2024-08-11 20:57:56.176112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd49930) 00:16:45.505 [2024-08-11 20:57:56.176127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.505 [2024-08-11 20:57:56.176133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd49930) 00:16:45.505 [2024-08-11 20:57:56.176148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.505 [2024-08-11 20:57:56.176154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.505 [2024-08-11 20:57:56.176168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.505 [2024-08-11 20:57:56.176174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:45.505 [2024-08-11 20:57:56.176189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:45.505 [2024-08-11 20:57:56.176199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd49930) 00:16:45.505 [2024-08-11 20:57:56.176210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.505 [2024-08-11 20:57:56.176236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82600, cid 0, qid 0 00:16:45.505 [2024-08-11 20:57:56.176245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82780, cid 1, qid 0 00:16:45.505 [2024-08-11 20:57:56.176250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82900, cid 2, qid 0 00:16:45.505 [2024-08-11 20:57:56.176255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.505 [2024-08-11 20:57:56.176261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82c00, cid 4, qid 0 00:16:45.505 [2024-08-11 20:57:56.176330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.505 [2024-08-11 20:57:56.176338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.505 [2024-08-11 20:57:56.176342] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82c00) on tqpair=0xd49930 00:16:45.505 [2024-08-11 20:57:56.176353] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:45.505 [2024-08-11 20:57:56.176359] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:45.505 [2024-08-11 20:57:56.176372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd49930) 00:16:45.505 [2024-08-11 20:57:56.176385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.505 [2024-08-11 20:57:56.176408] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82c00, cid 4, qid 0 00:16:45.505 [2024-08-11 20:57:56.176469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.505 [2024-08-11 20:57:56.176476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.505 [2024-08-11 20:57:56.176481] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176485] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd49930): datao=0, datal=4096, cccid=4 00:16:45.505 [2024-08-11 20:57:56.176489] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd82c00) on tqpair(0xd49930): expected_datao=0, payload_size=4096 00:16:45.505 [2024-08-11 20:57:56.176494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176501] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176506] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.505 [2024-08-11 20:57:56.176522] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.505 [2024-08-11 20:57:56.176527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82c00) on tqpair=0xd49930 00:16:45.505 [2024-08-11 20:57:56.176546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:45.505 [2024-08-11 20:57:56.176573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd49930) 00:16:45.505 [2024-08-11 20:57:56.176588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.505 [2024-08-11 20:57:56.176612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd49930) 00:16:45.505 [2024-08-11 
20:57:56.176629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.505 [2024-08-11 20:57:56.176662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82c00, cid 4, qid 0 00:16:45.505 [2024-08-11 20:57:56.176671] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82d80, cid 5, qid 0 00:16:45.505 [2024-08-11 20:57:56.176775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.505 [2024-08-11 20:57:56.176784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.505 [2024-08-11 20:57:56.176789] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176792] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd49930): datao=0, datal=1024, cccid=4 00:16:45.505 [2024-08-11 20:57:56.176797] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd82c00) on tqpair(0xd49930): expected_datao=0, payload_size=1024 00:16:45.505 [2024-08-11 20:57:56.176802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176809] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176813] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.505 [2024-08-11 20:57:56.176826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.505 [2024-08-11 20:57:56.176830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82d80) on tqpair=0xd49930 00:16:45.505 [2024-08-11 20:57:56.176857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.505 [2024-08-11 20:57:56.176866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.505 [2024-08-11 20:57:56.176870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82c00) on tqpair=0xd49930 00:16:45.505 [2024-08-11 20:57:56.176888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.176893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd49930) 00:16:45.505 [2024-08-11 20:57:56.176901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.505 [2024-08-11 20:57:56.176930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82c00, cid 4, qid 0 00:16:45.505 [2024-08-11 20:57:56.176992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.505 [2024-08-11 20:57:56.177001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.505 [2024-08-11 20:57:56.177005] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177009] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd49930): datao=0, datal=3072, cccid=4 00:16:45.505 [2024-08-11 20:57:56.177013] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd82c00) on tqpair(0xd49930): expected_datao=0, payload_size=3072 00:16:45.505 [2024-08-11 
20:57:56.177018] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177025] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177030] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.505 [2024-08-11 20:57:56.177046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.505 [2024-08-11 20:57:56.177050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177055] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82c00) on tqpair=0xd49930 00:16:45.505 [2024-08-11 20:57:56.177066] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd49930) 00:16:45.505 [2024-08-11 20:57:56.177079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.505 [2024-08-11 20:57:56.177108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82c00, cid 4, qid 0 00:16:45.505 [2024-08-11 20:57:56.177163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.505 [2024-08-11 20:57:56.177171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.505 [2024-08-11 20:57:56.177175] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177179] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd49930): datao=0, datal=8, cccid=4 00:16:45.505 [2024-08-11 20:57:56.177184] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd82c00) on tqpair(0xd49930): expected_datao=0, payload_size=8 00:16:45.505 [2024-08-11 20:57:56.177188] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177195] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177200] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.505 [2024-08-11 20:57:56.177228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.505 [2024-08-11 20:57:56.177232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.505 [2024-08-11 20:57:56.177237] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82c00) on tqpair=0xd49930 00:16:45.506 ===================================================== 00:16:45.506 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:45.506 ===================================================== 00:16:45.506 Controller Capabilities/Features 00:16:45.506 ================================ 00:16:45.506 Vendor ID: 0000 00:16:45.506 Subsystem Vendor ID: 0000 00:16:45.506 Serial Number: .................... 00:16:45.506 Model Number: ........................................ 
00:16:45.506 Firmware Version: 24.09 00:16:45.506 Recommended Arb Burst: 0 00:16:45.506 IEEE OUI Identifier: 00 00 00 00:16:45.506 Multi-path I/O 00:16:45.506 May have multiple subsystem ports: No 00:16:45.506 May have multiple controllers: No 00:16:45.506 Associated with SR-IOV VF: No 00:16:45.506 Max Data Transfer Size: 131072 00:16:45.506 Max Number of Namespaces: 0 00:16:45.506 Max Number of I/O Queues: 1024 00:16:45.506 NVMe Specification Version (VS): 1.3 00:16:45.506 NVMe Specification Version (Identify): 1.3 00:16:45.506 Maximum Queue Entries: 128 00:16:45.506 Contiguous Queues Required: Yes 00:16:45.506 Arbitration Mechanisms Supported 00:16:45.506 Weighted Round Robin: Not Supported 00:16:45.506 Vendor Specific: Not Supported 00:16:45.506 Reset Timeout: 15000 ms 00:16:45.506 Doorbell Stride: 4 bytes 00:16:45.506 NVM Subsystem Reset: Not Supported 00:16:45.506 Command Sets Supported 00:16:45.506 NVM Command Set: Supported 00:16:45.506 Boot Partition: Not Supported 00:16:45.506 Memory Page Size Minimum: 4096 bytes 00:16:45.506 Memory Page Size Maximum: 4096 bytes 00:16:45.506 Persistent Memory Region: Not Supported 00:16:45.506 Optional Asynchronous Events Supported 00:16:45.506 Namespace Attribute Notices: Not Supported 00:16:45.506 Firmware Activation Notices: Not Supported 00:16:45.506 ANA Change Notices: Not Supported 00:16:45.506 PLE Aggregate Log Change Notices: Not Supported 00:16:45.506 LBA Status Info Alert Notices: Not Supported 00:16:45.506 EGE Aggregate Log Change Notices: Not Supported 00:16:45.506 Normal NVM Subsystem Shutdown event: Not Supported 00:16:45.506 Zone Descriptor Change Notices: Not Supported 00:16:45.506 Discovery Log Change Notices: Supported 00:16:45.506 Controller Attributes 00:16:45.506 128-bit Host Identifier: Not Supported 00:16:45.506 Non-Operational Permissive Mode: Not Supported 00:16:45.506 NVM Sets: Not Supported 00:16:45.506 Read Recovery Levels: Not Supported 00:16:45.506 Endurance Groups: Not Supported 00:16:45.506 Predictable Latency Mode: Not Supported 00:16:45.506 Traffic Based Keep ALive: Not Supported 00:16:45.506 Namespace Granularity: Not Supported 00:16:45.506 SQ Associations: Not Supported 00:16:45.506 UUID List: Not Supported 00:16:45.506 Multi-Domain Subsystem: Not Supported 00:16:45.506 Fixed Capacity Management: Not Supported 00:16:45.506 Variable Capacity Management: Not Supported 00:16:45.506 Delete Endurance Group: Not Supported 00:16:45.506 Delete NVM Set: Not Supported 00:16:45.506 Extended LBA Formats Supported: Not Supported 00:16:45.506 Flexible Data Placement Supported: Not Supported 00:16:45.506 00:16:45.506 Controller Memory Buffer Support 00:16:45.506 ================================ 00:16:45.506 Supported: No 00:16:45.506 00:16:45.506 Persistent Memory Region Support 00:16:45.506 ================================ 00:16:45.506 Supported: No 00:16:45.506 00:16:45.506 Admin Command Set Attributes 00:16:45.506 ============================ 00:16:45.506 Security Send/Receive: Not Supported 00:16:45.506 Format NVM: Not Supported 00:16:45.506 Firmware Activate/Download: Not Supported 00:16:45.506 Namespace Management: Not Supported 00:16:45.506 Device Self-Test: Not Supported 00:16:45.506 Directives: Not Supported 00:16:45.506 NVMe-MI: Not Supported 00:16:45.506 Virtualization Management: Not Supported 00:16:45.506 Doorbell Buffer Config: Not Supported 00:16:45.506 Get LBA Status Capability: Not Supported 00:16:45.506 Command & Feature Lockdown Capability: Not Supported 00:16:45.506 Abort Command Limit: 1 00:16:45.506 Async 
Event Request Limit: 4 00:16:45.506 Number of Firmware Slots: N/A 00:16:45.506 Firmware Slot 1 Read-Only: N/A 00:16:45.506 Firmware Activation Without Reset: N/A 00:16:45.506 Multiple Update Detection Support: N/A 00:16:45.506 Firmware Update Granularity: No Information Provided 00:16:45.506 Per-Namespace SMART Log: No 00:16:45.506 Asymmetric Namespace Access Log Page: Not Supported 00:16:45.506 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:45.506 Command Effects Log Page: Not Supported 00:16:45.506 Get Log Page Extended Data: Supported 00:16:45.506 Telemetry Log Pages: Not Supported 00:16:45.506 Persistent Event Log Pages: Not Supported 00:16:45.506 Supported Log Pages Log Page: May Support 00:16:45.506 Commands Supported & Effects Log Page: Not Supported 00:16:45.506 Feature Identifiers & Effects Log Page:May Support 00:16:45.506 NVMe-MI Commands & Effects Log Page: May Support 00:16:45.506 Data Area 4 for Telemetry Log: Not Supported 00:16:45.506 Error Log Page Entries Supported: 128 00:16:45.506 Keep Alive: Not Supported 00:16:45.506 00:16:45.506 NVM Command Set Attributes 00:16:45.506 ========================== 00:16:45.506 Submission Queue Entry Size 00:16:45.506 Max: 1 00:16:45.506 Min: 1 00:16:45.506 Completion Queue Entry Size 00:16:45.506 Max: 1 00:16:45.506 Min: 1 00:16:45.506 Number of Namespaces: 0 00:16:45.506 Compare Command: Not Supported 00:16:45.506 Write Uncorrectable Command: Not Supported 00:16:45.506 Dataset Management Command: Not Supported 00:16:45.506 Write Zeroes Command: Not Supported 00:16:45.506 Set Features Save Field: Not Supported 00:16:45.506 Reservations: Not Supported 00:16:45.506 Timestamp: Not Supported 00:16:45.506 Copy: Not Supported 00:16:45.506 Volatile Write Cache: Not Present 00:16:45.506 Atomic Write Unit (Normal): 1 00:16:45.506 Atomic Write Unit (PFail): 1 00:16:45.506 Atomic Compare & Write Unit: 1 00:16:45.506 Fused Compare & Write: Supported 00:16:45.506 Scatter-Gather List 00:16:45.506 SGL Command Set: Supported 00:16:45.506 SGL Keyed: Supported 00:16:45.506 SGL Bit Bucket Descriptor: Not Supported 00:16:45.506 SGL Metadata Pointer: Not Supported 00:16:45.506 Oversized SGL: Not Supported 00:16:45.506 SGL Metadata Address: Not Supported 00:16:45.506 SGL Offset: Supported 00:16:45.506 Transport SGL Data Block: Not Supported 00:16:45.506 Replay Protected Memory Block: Not Supported 00:16:45.506 00:16:45.506 Firmware Slot Information 00:16:45.506 ========================= 00:16:45.506 Active slot: 0 00:16:45.506 00:16:45.506 00:16:45.506 Error Log 00:16:45.506 ========= 00:16:45.506 00:16:45.506 Active Namespaces 00:16:45.506 ================= 00:16:45.506 Discovery Log Page 00:16:45.506 ================== 00:16:45.506 Generation Counter: 2 00:16:45.506 Number of Records: 2 00:16:45.506 Record Format: 0 00:16:45.506 00:16:45.506 Discovery Log Entry 0 00:16:45.506 ---------------------- 00:16:45.506 Transport Type: 3 (TCP) 00:16:45.506 Address Family: 1 (IPv4) 00:16:45.506 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:45.506 Entry Flags: 00:16:45.506 Duplicate Returned Information: 1 00:16:45.506 Explicit Persistent Connection Support for Discovery: 1 00:16:45.506 Transport Requirements: 00:16:45.506 Secure Channel: Not Required 00:16:45.506 Port ID: 0 (0x0000) 00:16:45.506 Controller ID: 65535 (0xffff) 00:16:45.506 Admin Max SQ Size: 128 00:16:45.506 Transport Service Identifier: 4420 00:16:45.506 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:45.506 Transport Address: 10.0.0.3 00:16:45.506 
Discovery Log Entry 1 00:16:45.506 ---------------------- 00:16:45.506 Transport Type: 3 (TCP) 00:16:45.506 Address Family: 1 (IPv4) 00:16:45.506 Subsystem Type: 2 (NVM Subsystem) 00:16:45.506 Entry Flags: 00:16:45.506 Duplicate Returned Information: 0 00:16:45.506 Explicit Persistent Connection Support for Discovery: 0 00:16:45.506 Transport Requirements: 00:16:45.506 Secure Channel: Not Required 00:16:45.506 Port ID: 0 (0x0000) 00:16:45.506 Controller ID: 65535 (0xffff) 00:16:45.506 Admin Max SQ Size: 128 00:16:45.506 Transport Service Identifier: 4420 00:16:45.506 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:45.506 Transport Address: 10.0.0.3 [2024-08-11 20:57:56.177352] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:45.506 [2024-08-11 20:57:56.177371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82600) on tqpair=0xd49930 00:16:45.506 [2024-08-11 20:57:56.177379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.506 [2024-08-11 20:57:56.177385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82780) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.177391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.507 [2024-08-11 20:57:56.177396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82900) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.177400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.507 [2024-08-11 20:57:56.177405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.177410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.507 [2024-08-11 20:57:56.177428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.177447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.177476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.177541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.177549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.177553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.177567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.177583] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.177628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.177688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.177696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.177701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.177711] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:45.507 [2024-08-11 20:57:56.177716] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:45.507 [2024-08-11 20:57:56.177727] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177732] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177736] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.177745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.177769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.177815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.177823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.177827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.177843] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.177860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.177883] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.177922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.177930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.177934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177949] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.177963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.177972] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.177980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.178003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.178050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.178058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.178062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.178078] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.178095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.178117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.178164] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.178172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.178176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.178192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.178209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.178231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.178272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.178280] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.178284] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.178300] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.178317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.178339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.178398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.178406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.178411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.178427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.178444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.178467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.178505] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.178513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.178517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.178533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.178543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.178550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.178572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.182646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.182666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.182672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.182676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.182690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.182697] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.182701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd49930) 00:16:45.507 [2024-08-11 20:57:56.182710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.507 [2024-08-11 20:57:56.182738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a80, cid 3, qid 0 00:16:45.507 [2024-08-11 20:57:56.182784] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.507 [2024-08-11 20:57:56.182792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.507 [2024-08-11 20:57:56.182796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.507 [2024-08-11 20:57:56.182800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd82a80) on tqpair=0xd49930 00:16:45.507 [2024-08-11 20:57:56.182810] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:16:45.508 00:16:45.508 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:45.508 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:45.508 [2024-08-11 20:57:56.229198] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:16:45.508 [2024-08-11 20:57:56.229242] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86064 ] 00:16:45.773 [2024-08-11 20:57:56.367209] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:45.773 [2024-08-11 20:57:56.367283] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:45.773 [2024-08-11 20:57:56.367291] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:45.773 [2024-08-11 20:57:56.367301] nvme_tcp.c:2363:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:45.773 [2024-08-11 20:57:56.367310] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:45.773 [2024-08-11 20:57:56.367407] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:45.773 [2024-08-11 20:57:56.367451] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1147930 0 00:16:45.773 [2024-08-11 20:57:56.374641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:45.773 [2024-08-11 20:57:56.374664] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:45.773 [2024-08-11 20:57:56.374685] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:45.773 [2024-08-11 20:57:56.374689] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:45.773 [2024-08-11 20:57:56.374725] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.374733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.374737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1147930) 00:16:45.773 [2024-08-11 20:57:56.374748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:45.773 [2024-08-11 20:57:56.374780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180600, cid 0, qid 0 00:16:45.773 [2024-08-11 20:57:56.382649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.773 [2024-08-11 
20:57:56.382671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.773 [2024-08-11 20:57:56.382677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.382690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180600) on tqpair=0x1147930 00:16:45.773 [2024-08-11 20:57:56.382700] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:45.773 [2024-08-11 20:57:56.382708] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:45.773 [2024-08-11 20:57:56.382714] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:45.773 [2024-08-11 20:57:56.382728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.382734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.382738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1147930) 00:16:45.773 [2024-08-11 20:57:56.382746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.773 [2024-08-11 20:57:56.382775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180600, cid 0, qid 0 00:16:45.773 [2024-08-11 20:57:56.382841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.773 [2024-08-11 20:57:56.382860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.773 [2024-08-11 20:57:56.382864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.382868] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180600) on tqpair=0x1147930 00:16:45.773 [2024-08-11 20:57:56.382873] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:45.773 [2024-08-11 20:57:56.382881] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:45.773 [2024-08-11 20:57:56.382889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.382893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.382897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1147930) 00:16:45.773 [2024-08-11 20:57:56.382905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.773 [2024-08-11 20:57:56.382927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180600, cid 0, qid 0 00:16:45.773 [2024-08-11 20:57:56.383008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.773 [2024-08-11 20:57:56.383015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.773 [2024-08-11 20:57:56.383019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180600) on tqpair=0x1147930 00:16:45.773 [2024-08-11 20:57:56.383028] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:45.773 [2024-08-11 20:57:56.383037] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:45.773 [2024-08-11 20:57:56.383045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1147930) 00:16:45.773 [2024-08-11 20:57:56.383060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.773 [2024-08-11 20:57:56.383081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180600, cid 0, qid 0 00:16:45.773 [2024-08-11 20:57:56.383135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.773 [2024-08-11 20:57:56.383142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.773 [2024-08-11 20:57:56.383146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180600) on tqpair=0x1147930 00:16:45.773 [2024-08-11 20:57:56.383155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:45.773 [2024-08-11 20:57:56.383166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1147930) 00:16:45.773 [2024-08-11 20:57:56.383182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.773 [2024-08-11 20:57:56.383216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180600, cid 0, qid 0 00:16:45.773 [2024-08-11 20:57:56.383261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.773 [2024-08-11 20:57:56.383268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.773 [2024-08-11 20:57:56.383271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180600) on tqpair=0x1147930 00:16:45.773 [2024-08-11 20:57:56.383281] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:45.773 [2024-08-11 20:57:56.383286] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:45.773 [2024-08-11 20:57:56.383294] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:45.773 [2024-08-11 20:57:56.383399] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:45.773 [2024-08-11 20:57:56.383404] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:45.773 [2024-08-11 20:57:56.383412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:16:45.773 [2024-08-11 20:57:56.383417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1147930) 00:16:45.773 [2024-08-11 20:57:56.383428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.773 [2024-08-11 20:57:56.383450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180600, cid 0, qid 0 00:16:45.773 [2024-08-11 20:57:56.383495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.773 [2024-08-11 20:57:56.383502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.773 [2024-08-11 20:57:56.383506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180600) on tqpair=0x1147930 00:16:45.773 [2024-08-11 20:57:56.383515] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:45.773 [2024-08-11 20:57:56.383525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383530] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.773 [2024-08-11 20:57:56.383534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1147930) 00:16:45.773 [2024-08-11 20:57:56.383541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.773 [2024-08-11 20:57:56.383563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180600, cid 0, qid 0 00:16:45.774 [2024-08-11 20:57:56.383624] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.774 [2024-08-11 20:57:56.383634] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.774 [2024-08-11 20:57:56.383637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.383641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180600) on tqpair=0x1147930 00:16:45.774 [2024-08-11 20:57:56.383647] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:45.774 [2024-08-11 20:57:56.383652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.383660] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:45.774 [2024-08-11 20:57:56.383673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.383683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.383687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1147930) 00:16:45.774 [2024-08-11 20:57:56.383695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.774 [2024-08-11 20:57:56.383721] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180600, cid 0, qid 0 00:16:45.774 [2024-08-11 20:57:56.383809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.774 [2024-08-11 20:57:56.383817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.774 [2024-08-11 20:57:56.383821] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.383825] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1147930): datao=0, datal=4096, cccid=0 00:16:45.774 [2024-08-11 20:57:56.383829] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1180600) on tqpair(0x1147930): expected_datao=0, payload_size=4096 00:16:45.774 [2024-08-11 20:57:56.383834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.383841] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.383845] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.383855] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.774 [2024-08-11 20:57:56.383861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.774 [2024-08-11 20:57:56.383864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.383869] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180600) on tqpair=0x1147930 00:16:45.774 [2024-08-11 20:57:56.383876] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:45.774 [2024-08-11 20:57:56.383882] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:45.774 [2024-08-11 20:57:56.383891] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:45.774 [2024-08-11 20:57:56.383897] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:45.774 [2024-08-11 20:57:56.383902] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:45.774 [2024-08-11 20:57:56.383907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.383916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.383924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.383929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.383933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1147930) 00:16:45.774 [2024-08-11 20:57:56.383941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.774 [2024-08-11 20:57:56.383982] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180600, cid 0, qid 0 00:16:45.774 [2024-08-11 20:57:56.384041] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.774 [2024-08-11 20:57:56.384048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.774 [2024-08-11 20:57:56.384052] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180600) on tqpair=0x1147930 00:16:45.774 [2024-08-11 20:57:56.384064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1147930) 00:16:45.774 [2024-08-11 20:57:56.384079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.774 [2024-08-11 20:57:56.384085] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384089] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1147930) 00:16:45.774 [2024-08-11 20:57:56.384098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.774 [2024-08-11 20:57:56.384105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1147930) 00:16:45.774 [2024-08-11 20:57:56.384118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.774 [2024-08-11 20:57:56.384124] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.774 [2024-08-11 20:57:56.384136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.774 [2024-08-11 20:57:56.384142] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.384156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.384165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1147930) 00:16:45.774 [2024-08-11 20:57:56.384176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.774 [2024-08-11 20:57:56.384200] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180600, cid 0, qid 0 00:16:45.774 [2024-08-11 20:57:56.384209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180780, cid 1, qid 0 00:16:45.774 [2024-08-11 20:57:56.384214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180900, cid 2, qid 0 00:16:45.774 [2024-08-11 20:57:56.384218] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.774 [2024-08-11 20:57:56.384223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180c00, cid 4, qid 0 00:16:45.774 [2024-08-11 20:57:56.384302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.774 [2024-08-11 20:57:56.384310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.774 [2024-08-11 20:57:56.384314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180c00) on tqpair=0x1147930 00:16:45.774 [2024-08-11 20:57:56.384323] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:45.774 [2024-08-11 20:57:56.384329] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.384337] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.384344] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.384366] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1147930) 00:16:45.774 [2024-08-11 20:57:56.384381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.774 [2024-08-11 20:57:56.384404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180c00, cid 4, qid 0 00:16:45.774 [2024-08-11 20:57:56.384454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.774 [2024-08-11 20:57:56.384461] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.774 [2024-08-11 20:57:56.384465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180c00) on tqpair=0x1147930 00:16:45.774 [2024-08-11 20:57:56.384524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.384538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:45.774 [2024-08-11 20:57:56.384547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1147930) 00:16:45.774 [2024-08-11 20:57:56.384558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.774 [2024-08-11 20:57:56.384581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180c00, cid 4, qid 0 00:16:45.774 [2024-08-11 20:57:56.384693] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.774 [2024-08-11 20:57:56.384704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.774 [2024-08-11 20:57:56.384708] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384711] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1147930): datao=0, datal=4096, cccid=4 00:16:45.774 [2024-08-11 20:57:56.384716] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1180c00) on tqpair(0x1147930): expected_datao=0, payload_size=4096 00:16:45.774 [2024-08-11 20:57:56.384721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384728] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384732] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.774 [2024-08-11 20:57:56.384747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.774 [2024-08-11 20:57:56.384751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.774 [2024-08-11 20:57:56.384755] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180c00) on tqpair=0x1147930 00:16:45.774 [2024-08-11 20:57:56.384766] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:45.775 [2024-08-11 20:57:56.384780] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.384793] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.384814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.384818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.384826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.775 [2024-08-11 20:57:56.384853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180c00, cid 4, qid 0 00:16:45.775 [2024-08-11 20:57:56.384926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.775 [2024-08-11 20:57:56.384934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.775 [2024-08-11 20:57:56.384938] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.384942] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1147930): datao=0, datal=4096, cccid=4 00:16:45.775 [2024-08-11 20:57:56.384946] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1180c00) on tqpair(0x1147930): expected_datao=0, payload_size=4096 00:16:45.775 [2024-08-11 20:57:56.384951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.384958] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.384962] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.384971] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.775 [2024-08-11 
20:57:56.384977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.775 [2024-08-11 20:57:56.384981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.384985] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180c00) on tqpair=0x1147930 00:16:45.775 [2024-08-11 20:57:56.385002] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.385015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.385024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385029] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.385036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.775 [2024-08-11 20:57:56.385076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180c00, cid 4, qid 0 00:16:45.775 [2024-08-11 20:57:56.385135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.775 [2024-08-11 20:57:56.385143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.775 [2024-08-11 20:57:56.385147] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385150] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1147930): datao=0, datal=4096, cccid=4 00:16:45.775 [2024-08-11 20:57:56.385155] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1180c00) on tqpair(0x1147930): expected_datao=0, payload_size=4096 00:16:45.775 [2024-08-11 20:57:56.385159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385166] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385170] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.775 [2024-08-11 20:57:56.385185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.775 [2024-08-11 20:57:56.385189] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180c00) on tqpair=0x1147930 00:16:45.775 [2024-08-11 20:57:56.385202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.385211] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.385222] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.385230] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.385236] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.385241] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.385246] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:45.775 [2024-08-11 20:57:56.385251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:45.775 [2024-08-11 20:57:56.385257] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:45.775 [2024-08-11 20:57:56.385272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.385285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.775 [2024-08-11 20:57:56.385293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.385306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:45.775 [2024-08-11 20:57:56.385334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180c00, cid 4, qid 0 00:16:45.775 [2024-08-11 20:57:56.385343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180d80, cid 5, qid 0 00:16:45.775 [2024-08-11 20:57:56.385401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.775 [2024-08-11 20:57:56.385409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.775 [2024-08-11 20:57:56.385413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180c00) on tqpair=0x1147930 00:16:45.775 [2024-08-11 20:57:56.385424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.775 [2024-08-11 20:57:56.385430] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.775 [2024-08-11 20:57:56.385434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180d80) on tqpair=0x1147930 00:16:45.775 [2024-08-11 20:57:56.385448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.385461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.775 [2024-08-11 20:57:56.385483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180d80, cid 5, qid 0 00:16:45.775 [2024-08-11 20:57:56.385526] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.775 [2024-08-11 20:57:56.385533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.775 [2024-08-11 20:57:56.385537] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180d80) on tqpair=0x1147930 00:16:45.775 [2024-08-11 20:57:56.385552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.385564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.775 [2024-08-11 20:57:56.385586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180d80, cid 5, qid 0 00:16:45.775 [2024-08-11 20:57:56.385660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.775 [2024-08-11 20:57:56.385670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.775 [2024-08-11 20:57:56.385674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180d80) on tqpair=0x1147930 00:16:45.775 [2024-08-11 20:57:56.385690] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.385703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.775 [2024-08-11 20:57:56.385727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180d80, cid 5, qid 0 00:16:45.775 [2024-08-11 20:57:56.385776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.775 [2024-08-11 20:57:56.385783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.775 [2024-08-11 20:57:56.385787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180d80) on tqpair=0x1147930 00:16:45.775 [2024-08-11 20:57:56.385811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.385825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.775 [2024-08-11 20:57:56.385834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.385844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.775 [2024-08-11 20:57:56.385852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385856] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.385862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.775 [2024-08-11 20:57:56.385870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.775 [2024-08-11 20:57:56.385875] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1147930) 00:16:45.775 [2024-08-11 20:57:56.385881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.775 [2024-08-11 20:57:56.385906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180d80, cid 5, qid 0 00:16:45.775 [2024-08-11 20:57:56.385915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180c00, cid 4, qid 0 00:16:45.775 [2024-08-11 20:57:56.385920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180f00, cid 6, qid 0 00:16:45.776 [2024-08-11 20:57:56.385925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181080, cid 7, qid 0 00:16:45.776 [2024-08-11 20:57:56.386090] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.776 [2024-08-11 20:57:56.386099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.776 [2024-08-11 20:57:56.386103] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386107] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1147930): datao=0, datal=8192, cccid=5 00:16:45.776 [2024-08-11 20:57:56.386111] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1180d80) on tqpair(0x1147930): expected_datao=0, payload_size=8192 00:16:45.776 [2024-08-11 20:57:56.386116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386134] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386140] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.776 [2024-08-11 20:57:56.386151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.776 [2024-08-11 20:57:56.386155] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386159] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1147930): datao=0, datal=512, cccid=4 00:16:45.776 [2024-08-11 20:57:56.386163] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1180c00) on tqpair(0x1147930): expected_datao=0, payload_size=512 00:16:45.776 [2024-08-11 20:57:56.386167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386173] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386177] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386182] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.776 [2024-08-11 20:57:56.386188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.776 [2024-08-11 20:57:56.386191] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:16:45.776 [2024-08-11 20:57:56.386195] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1147930): datao=0, datal=512, cccid=6 00:16:45.776 [2024-08-11 20:57:56.386201] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1180f00) on tqpair(0x1147930): expected_datao=0, payload_size=512 00:16:45.776 [2024-08-11 20:57:56.386205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386211] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386214] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:45.776 [2024-08-11 20:57:56.386242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:45.776 [2024-08-11 20:57:56.386246] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386258] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1147930): datao=0, datal=4096, cccid=7 00:16:45.776 [2024-08-11 20:57:56.386262] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1181080) on tqpair(0x1147930): expected_datao=0, payload_size=4096 00:16:45.776 [2024-08-11 20:57:56.386266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386272] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386275] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386284] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.776 [2024-08-11 20:57:56.386290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.776 [2024-08-11 20:57:56.386294] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386297] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180d80) on tqpair=0x1147930 00:16:45.776 [2024-08-11 20:57:56.386313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.776 [2024-08-11 20:57:56.386320] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.776 [2024-08-11 20:57:56.386324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180c00) on tqpair=0x1147930 00:16:45.776 [2024-08-11 20:57:56.386341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.776 [2024-08-11 20:57:56.386348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.776 [2024-08-11 20:57:56.386351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180f00) on tqpair=0x1147930 00:16:45.776 [2024-08-11 20:57:56.386362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.776 [2024-08-11 20:57:56.386370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.776 [2024-08-11 20:57:56.386374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.776 [2024-08-11 20:57:56.386378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181080) on tqpair=0x1147930 00:16:45.776 ===================================================== 00:16:45.776 NVMe over Fabrics 
controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:45.776 ===================================================== 00:16:45.776 Controller Capabilities/Features 00:16:45.776 ================================ 00:16:45.776 Vendor ID: 8086 00:16:45.776 Subsystem Vendor ID: 8086 00:16:45.776 Serial Number: SPDK00000000000001 00:16:45.776 Model Number: SPDK bdev Controller 00:16:45.776 Firmware Version: 24.09 00:16:45.776 Recommended Arb Burst: 6 00:16:45.776 IEEE OUI Identifier: e4 d2 5c 00:16:45.776 Multi-path I/O 00:16:45.776 May have multiple subsystem ports: Yes 00:16:45.776 May have multiple controllers: Yes 00:16:45.776 Associated with SR-IOV VF: No 00:16:45.776 Max Data Transfer Size: 131072 00:16:45.776 Max Number of Namespaces: 32 00:16:45.776 Max Number of I/O Queues: 127 00:16:45.776 NVMe Specification Version (VS): 1.3 00:16:45.776 NVMe Specification Version (Identify): 1.3 00:16:45.776 Maximum Queue Entries: 128 00:16:45.776 Contiguous Queues Required: Yes 00:16:45.776 Arbitration Mechanisms Supported 00:16:45.776 Weighted Round Robin: Not Supported 00:16:45.776 Vendor Specific: Not Supported 00:16:45.776 Reset Timeout: 15000 ms 00:16:45.776 Doorbell Stride: 4 bytes 00:16:45.776 NVM Subsystem Reset: Not Supported 00:16:45.776 Command Sets Supported 00:16:45.776 NVM Command Set: Supported 00:16:45.776 Boot Partition: Not Supported 00:16:45.776 Memory Page Size Minimum: 4096 bytes 00:16:45.776 Memory Page Size Maximum: 4096 bytes 00:16:45.776 Persistent Memory Region: Not Supported 00:16:45.776 Optional Asynchronous Events Supported 00:16:45.776 Namespace Attribute Notices: Supported 00:16:45.776 Firmware Activation Notices: Not Supported 00:16:45.776 ANA Change Notices: Not Supported 00:16:45.776 PLE Aggregate Log Change Notices: Not Supported 00:16:45.776 LBA Status Info Alert Notices: Not Supported 00:16:45.776 EGE Aggregate Log Change Notices: Not Supported 00:16:45.776 Normal NVM Subsystem Shutdown event: Not Supported 00:16:45.776 Zone Descriptor Change Notices: Not Supported 00:16:45.776 Discovery Log Change Notices: Not Supported 00:16:45.776 Controller Attributes 00:16:45.776 128-bit Host Identifier: Supported 00:16:45.776 Non-Operational Permissive Mode: Not Supported 00:16:45.776 NVM Sets: Not Supported 00:16:45.776 Read Recovery Levels: Not Supported 00:16:45.776 Endurance Groups: Not Supported 00:16:45.776 Predictable Latency Mode: Not Supported 00:16:45.776 Traffic Based Keep ALive: Not Supported 00:16:45.776 Namespace Granularity: Not Supported 00:16:45.776 SQ Associations: Not Supported 00:16:45.776 UUID List: Not Supported 00:16:45.776 Multi-Domain Subsystem: Not Supported 00:16:45.776 Fixed Capacity Management: Not Supported 00:16:45.776 Variable Capacity Management: Not Supported 00:16:45.776 Delete Endurance Group: Not Supported 00:16:45.776 Delete NVM Set: Not Supported 00:16:45.776 Extended LBA Formats Supported: Not Supported 00:16:45.776 Flexible Data Placement Supported: Not Supported 00:16:45.776 00:16:45.776 Controller Memory Buffer Support 00:16:45.776 ================================ 00:16:45.776 Supported: No 00:16:45.776 00:16:45.776 Persistent Memory Region Support 00:16:45.776 ================================ 00:16:45.776 Supported: No 00:16:45.776 00:16:45.776 Admin Command Set Attributes 00:16:45.776 ============================ 00:16:45.776 Security Send/Receive: Not Supported 00:16:45.776 Format NVM: Not Supported 00:16:45.776 Firmware Activate/Download: Not Supported 00:16:45.776 Namespace Management: Not Supported 00:16:45.776 Device 
Self-Test: Not Supported 00:16:45.776 Directives: Not Supported 00:16:45.776 NVMe-MI: Not Supported 00:16:45.776 Virtualization Management: Not Supported 00:16:45.776 Doorbell Buffer Config: Not Supported 00:16:45.776 Get LBA Status Capability: Not Supported 00:16:45.776 Command & Feature Lockdown Capability: Not Supported 00:16:45.776 Abort Command Limit: 4 00:16:45.776 Async Event Request Limit: 4 00:16:45.776 Number of Firmware Slots: N/A 00:16:45.776 Firmware Slot 1 Read-Only: N/A 00:16:45.776 Firmware Activation Without Reset: N/A 00:16:45.776 Multiple Update Detection Support: N/A 00:16:45.776 Firmware Update Granularity: No Information Provided 00:16:45.776 Per-Namespace SMART Log: No 00:16:45.776 Asymmetric Namespace Access Log Page: Not Supported 00:16:45.776 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:45.776 Command Effects Log Page: Supported 00:16:45.776 Get Log Page Extended Data: Supported 00:16:45.776 Telemetry Log Pages: Not Supported 00:16:45.776 Persistent Event Log Pages: Not Supported 00:16:45.776 Supported Log Pages Log Page: May Support 00:16:45.776 Commands Supported & Effects Log Page: Not Supported 00:16:45.776 Feature Identifiers & Effects Log Page:May Support 00:16:45.776 NVMe-MI Commands & Effects Log Page: May Support 00:16:45.776 Data Area 4 for Telemetry Log: Not Supported 00:16:45.776 Error Log Page Entries Supported: 128 00:16:45.776 Keep Alive: Supported 00:16:45.776 Keep Alive Granularity: 10000 ms 00:16:45.776 00:16:45.776 NVM Command Set Attributes 00:16:45.776 ========================== 00:16:45.776 Submission Queue Entry Size 00:16:45.776 Max: 64 00:16:45.776 Min: 64 00:16:45.776 Completion Queue Entry Size 00:16:45.777 Max: 16 00:16:45.777 Min: 16 00:16:45.777 Number of Namespaces: 32 00:16:45.777 Compare Command: Supported 00:16:45.777 Write Uncorrectable Command: Not Supported 00:16:45.777 Dataset Management Command: Supported 00:16:45.777 Write Zeroes Command: Supported 00:16:45.777 Set Features Save Field: Not Supported 00:16:45.777 Reservations: Supported 00:16:45.777 Timestamp: Not Supported 00:16:45.777 Copy: Supported 00:16:45.777 Volatile Write Cache: Present 00:16:45.777 Atomic Write Unit (Normal): 1 00:16:45.777 Atomic Write Unit (PFail): 1 00:16:45.777 Atomic Compare & Write Unit: 1 00:16:45.777 Fused Compare & Write: Supported 00:16:45.777 Scatter-Gather List 00:16:45.777 SGL Command Set: Supported 00:16:45.777 SGL Keyed: Supported 00:16:45.777 SGL Bit Bucket Descriptor: Not Supported 00:16:45.777 SGL Metadata Pointer: Not Supported 00:16:45.777 Oversized SGL: Not Supported 00:16:45.777 SGL Metadata Address: Not Supported 00:16:45.777 SGL Offset: Supported 00:16:45.777 Transport SGL Data Block: Not Supported 00:16:45.777 Replay Protected Memory Block: Not Supported 00:16:45.777 00:16:45.777 Firmware Slot Information 00:16:45.777 ========================= 00:16:45.777 Active slot: 1 00:16:45.777 Slot 1 Firmware Revision: 24.09 00:16:45.777 00:16:45.777 00:16:45.777 Commands Supported and Effects 00:16:45.777 ============================== 00:16:45.777 Admin Commands 00:16:45.777 -------------- 00:16:45.777 Get Log Page (02h): Supported 00:16:45.777 Identify (06h): Supported 00:16:45.777 Abort (08h): Supported 00:16:45.777 Set Features (09h): Supported 00:16:45.777 Get Features (0Ah): Supported 00:16:45.777 Asynchronous Event Request (0Ch): Supported 00:16:45.777 Keep Alive (18h): Supported 00:16:45.777 I/O Commands 00:16:45.777 ------------ 00:16:45.777 Flush (00h): Supported LBA-Change 00:16:45.777 Write (01h): Supported LBA-Change 
00:16:45.777 Read (02h): Supported 00:16:45.777 Compare (05h): Supported 00:16:45.777 Write Zeroes (08h): Supported LBA-Change 00:16:45.777 Dataset Management (09h): Supported LBA-Change 00:16:45.777 Copy (19h): Supported LBA-Change 00:16:45.777 00:16:45.777 Error Log 00:16:45.777 ========= 00:16:45.777 00:16:45.777 Arbitration 00:16:45.777 =========== 00:16:45.777 Arbitration Burst: 1 00:16:45.777 00:16:45.777 Power Management 00:16:45.777 ================ 00:16:45.777 Number of Power States: 1 00:16:45.777 Current Power State: Power State #0 00:16:45.777 Power State #0: 00:16:45.777 Max Power: 0.00 W 00:16:45.777 Non-Operational State: Operational 00:16:45.777 Entry Latency: Not Reported 00:16:45.777 Exit Latency: Not Reported 00:16:45.777 Relative Read Throughput: 0 00:16:45.777 Relative Read Latency: 0 00:16:45.777 Relative Write Throughput: 0 00:16:45.777 Relative Write Latency: 0 00:16:45.777 Idle Power: Not Reported 00:16:45.777 Active Power: Not Reported 00:16:45.777 Non-Operational Permissive Mode: Not Supported 00:16:45.777 00:16:45.777 Health Information 00:16:45.777 ================== 00:16:45.777 Critical Warnings: 00:16:45.777 Available Spare Space: OK 00:16:45.777 Temperature: OK 00:16:45.777 Device Reliability: OK 00:16:45.777 Read Only: No 00:16:45.777 Volatile Memory Backup: OK 00:16:45.777 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:45.777 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:45.777 Available Spare: 0% 00:16:45.777 Available Spare Threshold: 0% 00:16:45.777 Life Percentage Used:[2024-08-11 20:57:56.386469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.386478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1147930) 00:16:45.777 [2024-08-11 20:57:56.386486] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.777 [2024-08-11 20:57:56.386512] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1181080, cid 7, qid 0 00:16:45.777 [2024-08-11 20:57:56.386564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.777 [2024-08-11 20:57:56.386571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.777 [2024-08-11 20:57:56.386575] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.386579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1181080) on tqpair=0x1147930 00:16:45.777 [2024-08-11 20:57:56.390662] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:45.777 [2024-08-11 20:57:56.390688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180600) on tqpair=0x1147930 00:16:45.777 [2024-08-11 20:57:56.390697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.777 [2024-08-11 20:57:56.390704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180780) on tqpair=0x1147930 00:16:45.777 [2024-08-11 20:57:56.390709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.777 [2024-08-11 20:57:56.390714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180900) on tqpair=0x1147930 00:16:45.777 [2024-08-11 20:57:56.390719] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.777 [2024-08-11 20:57:56.390724] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.777 [2024-08-11 20:57:56.390728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:45.777 [2024-08-11 20:57:56.390739] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.390744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.390748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.777 [2024-08-11 20:57:56.390756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.777 [2024-08-11 20:57:56.390785] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.777 [2024-08-11 20:57:56.390840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.777 [2024-08-11 20:57:56.390848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.777 [2024-08-11 20:57:56.390852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.390857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.777 [2024-08-11 20:57:56.390865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.390869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.390873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.777 [2024-08-11 20:57:56.390881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.777 [2024-08-11 20:57:56.390924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.777 [2024-08-11 20:57:56.391003] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.777 [2024-08-11 20:57:56.391011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.777 [2024-08-11 20:57:56.391015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.391019] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.777 [2024-08-11 20:57:56.391024] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:45.777 [2024-08-11 20:57:56.391029] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:45.777 [2024-08-11 20:57:56.391040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.391046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.391050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.777 [2024-08-11 20:57:56.391058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.777 [2024-08-11 20:57:56.391080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1180a80, cid 3, qid 0 00:16:45.777 [2024-08-11 20:57:56.391126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.777 [2024-08-11 20:57:56.391134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.777 [2024-08-11 20:57:56.391138] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.777 [2024-08-11 20:57:56.391143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.777 [2024-08-11 20:57:56.391155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.391172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.391207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.391248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.391255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.391259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.391275] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.391293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.391325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.391366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.391374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.391378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.391394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.391411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.391434] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.391479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.391486] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.391490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.391507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391512] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.391524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.391546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.391588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.391596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.391600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.391616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.391652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.391677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.391721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.391729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.391733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.391750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.391773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.391796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.391843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.391850] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.391854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 
20:57:56.391859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.391871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.391888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.391911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.391956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.391963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.391967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.391983] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.391993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.392000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.392023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.392063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.392071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.392075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.392090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.392108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.392130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.392176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.392190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.392195] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.392212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392218] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392222] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.392230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.392253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.392295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.392302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.392306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.392322] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.392340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.392362] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.392406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.392413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.392417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.392433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392443] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.778 [2024-08-11 20:57:56.392450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.778 [2024-08-11 20:57:56.392473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.778 [2024-08-11 20:57:56.392513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.778 [2024-08-11 20:57:56.392520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.778 [2024-08-11 20:57:56.392524] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.778 [2024-08-11 20:57:56.392540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.778 [2024-08-11 20:57:56.392550] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.392558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.392580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.392634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.392644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.392648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.392652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.392665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.392671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.392675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.392683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.392708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.392751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.392759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.392763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.392767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.392780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.392785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.392789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.392797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.392820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.392866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.392874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.392878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.392882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.392894] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.392900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.392903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.392911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.392935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.392974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.392981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.392985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.392990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.393002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.393019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.393041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.393080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.393088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.393092] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.393107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393113] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.393124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.393147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.393191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.393199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.393203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.393219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.393236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.393259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 
20:57:56.393298] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.393306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.393310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393314] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.393326] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393331] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.393343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.393365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.393407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.393415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.393419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.393435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.393452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.393474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.393517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.393525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.393529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.393545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.393562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.393585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.393636] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.393646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 
[2024-08-11 20:57:56.393650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.393666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.393684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.393708] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.393750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.393758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.393762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393766] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.393778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.393795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.393818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.393862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.779 [2024-08-11 20:57:56.393875] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.779 [2024-08-11 20:57:56.393880] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.779 [2024-08-11 20:57:56.393897] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.779 [2024-08-11 20:57:56.393907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.779 [2024-08-11 20:57:56.393915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.779 [2024-08-11 20:57:56.393947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.779 [2024-08-11 20:57:56.394014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.394022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.394026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.394042] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.394060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.394084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.394128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.394136] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.394140] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.394156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.394173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.394195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.394249] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.394257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.394261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.394276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.394293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.394315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.394371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.394379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.394383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.394398] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394404] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.394415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.394437] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.394479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.394486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.394490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.394506] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.394522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.394545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.394585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.394592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.394596] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.394625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.394644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.394668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.394715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.394723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.394727] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.394743] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 
[2024-08-11 20:57:56.394760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.394782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.394831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.394838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.394842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394846] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.394858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394863] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.394875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.394897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.394938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.394946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.394950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.394965] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394971] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.394974] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.394982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.395004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.395044] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.395052] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.395055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.395060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.395071] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.395076] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.395080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.395088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.395109] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.395150] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.395158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.395162] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.395166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.395177] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.395183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.395187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.395194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.395229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.395275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.395283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.395287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.395291] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.780 [2024-08-11 20:57:56.395302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.395308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.780 [2024-08-11 20:57:56.395312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.780 [2024-08-11 20:57:56.395319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.780 [2024-08-11 20:57:56.395341] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.780 [2024-08-11 20:57:56.395384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.780 [2024-08-11 20:57:56.395392] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.780 [2024-08-11 20:57:56.395396] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395400] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.395411] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.395428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.395450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.395490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 
[2024-08-11 20:57:56.395497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.395501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.395517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.395534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.395556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.395607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.395616] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.395620] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.395637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.395654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.395677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.395726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.395733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.395737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.395753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.395770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.395793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.395833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.395840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.395844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
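[editor's note] The repeating FABRIC PROPERTY GET trace here is consistent with the shutdown started earlier ("Prepare to destruct SSD", "RTD3E = 0 us", "shutdown timeout = 10000 ms"): after setting CC.SHN the host keeps reading CSTS over the fabrics property-get path until shutdown reports complete or the 10 s timeout expires. As a rough sketch of the application-level view only (not the autotest's code), this polling is normally driven by a detach call, assuming `ctrlr` came from spdk_nvme_connect() as in the earlier sketch:

/* Hedged sketch: asynchronous detach with SPDK's public API. Each poll lets
 * the driver advance its shutdown state machine; on a fabrics controller that
 * is where repeated CSTS property reads like the ones above come from. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
detach_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &ctx) != 0 || ctx == NULL) {
		return;
	}

	/* -EAGAIN means the controller is still shutting down; 0 means done. */
	while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
		;
	}
}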
00:16:45.781 [2024-08-11 20:57:56.395848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.395860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.395876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.395898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.395944] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.395952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.395956] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395960] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.395971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.395981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.395988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.396010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.396053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.396060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.396064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.396080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.396096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.396118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.396167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.396174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.396178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.396194] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.396210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.396232] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.396283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.396297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.396302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.396319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.396336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.396359] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.396398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.396406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.396410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.396426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.396443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.396465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.396508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.396515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.396519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.396535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396544] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.396552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.396573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.396628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.396637] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.396642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.781 [2024-08-11 20:57:56.396658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396663] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.781 [2024-08-11 20:57:56.396667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.781 [2024-08-11 20:57:56.396675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.781 [2024-08-11 20:57:56.396699] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.781 [2024-08-11 20:57:56.396742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.781 [2024-08-11 20:57:56.396750] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.781 [2024-08-11 20:57:56.396754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.396758] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.396769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.396775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.396779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.396786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.396808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 20:57:56.396847] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.782 [2024-08-11 20:57:56.396855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.782 [2024-08-11 20:57:56.396859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.396863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.396874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.396880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.396884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.396891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.396914] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 20:57:56.396951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.782 [2024-08-11 20:57:56.396959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.782 [2024-08-11 20:57:56.396963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.396967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.396978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.396984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.396988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.396995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.397017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 20:57:56.397064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.782 [2024-08-11 20:57:56.397071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.782 [2024-08-11 20:57:56.397075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.397091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397100] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.397107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.397129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 20:57:56.397169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.782 [2024-08-11 20:57:56.397177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.782 [2024-08-11 20:57:56.397181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397185] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.397196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.397213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.397235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 
20:57:56.397276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.782 [2024-08-11 20:57:56.397284] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.782 [2024-08-11 20:57:56.397288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.397304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397313] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.397321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.397344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 20:57:56.397384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.782 [2024-08-11 20:57:56.397402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.782 [2024-08-11 20:57:56.397408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397412] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.397424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397430] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.397442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.397465] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 20:57:56.397511] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.782 [2024-08-11 20:57:56.397518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.782 [2024-08-11 20:57:56.397522] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.397538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.397555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.397578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 20:57:56.397650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.782 [2024-08-11 20:57:56.397659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.782 
[2024-08-11 20:57:56.397664] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.397680] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397689] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.397697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.397721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 20:57:56.397764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.782 [2024-08-11 20:57:56.397771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.782 [2024-08-11 20:57:56.397775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.397791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.397808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.397830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 20:57:56.397870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.782 [2024-08-11 20:57:56.397882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.782 [2024-08-11 20:57:56.397887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.782 [2024-08-11 20:57:56.397903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.782 [2024-08-11 20:57:56.397913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.782 [2024-08-11 20:57:56.397921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.782 [2024-08-11 20:57:56.397976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.782 [2024-08-11 20:57:56.398042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.783 [2024-08-11 20:57:56.398058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.783 [2024-08-11 20:57:56.398063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.783 [2024-08-11 20:57:56.398080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.783 [2024-08-11 20:57:56.398097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.783 [2024-08-11 20:57:56.398121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.783 [2024-08-11 20:57:56.398163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.783 [2024-08-11 20:57:56.398171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.783 [2024-08-11 20:57:56.398175] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.783 [2024-08-11 20:57:56.398191] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398196] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.783 [2024-08-11 20:57:56.398208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.783 [2024-08-11 20:57:56.398230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.783 [2024-08-11 20:57:56.398270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.783 [2024-08-11 20:57:56.398278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.783 [2024-08-11 20:57:56.398282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.783 [2024-08-11 20:57:56.398297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.783 [2024-08-11 20:57:56.398314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.783 [2024-08-11 20:57:56.398336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.783 [2024-08-11 20:57:56.398400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.783 [2024-08-11 20:57:56.398415] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.783 [2024-08-11 20:57:56.398421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.783 [2024-08-11 20:57:56.398437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398443] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.783 [2024-08-11 20:57:56.398454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.783 [2024-08-11 20:57:56.398477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.783 [2024-08-11 20:57:56.398517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.783 [2024-08-11 20:57:56.398525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.783 [2024-08-11 20:57:56.398529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.783 [2024-08-11 20:57:56.398545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.398554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.783 [2024-08-11 20:57:56.398562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.783 [2024-08-11 20:57:56.398584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.783 [2024-08-11 20:57:56.402401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.783 [2024-08-11 20:57:56.402442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.783 [2024-08-11 20:57:56.402449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.402469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.783 [2024-08-11 20:57:56.402486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.402492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.402496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1147930) 00:16:45.783 [2024-08-11 20:57:56.402505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:45.783 [2024-08-11 20:57:56.402537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1180a80, cid 3, qid 0 00:16:45.783 [2024-08-11 20:57:56.402592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:45.783 [2024-08-11 20:57:56.402600] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:45.783 [2024-08-11 20:57:56.402604] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:45.783 [2024-08-11 20:57:56.402608] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1180a80) on tqpair=0x1147930 00:16:45.783 [2024-08-11 20:57:56.402617] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 11 milliseconds 00:16:45.783 0% 00:16:45.783 Data Units Read: 0 00:16:45.783 Data Units Written: 0 00:16:45.783 Host Read Commands: 0 00:16:45.783 Host Write Commands: 0 00:16:45.783 Controller Busy Time: 0 minutes 
00:16:45.783 Power Cycles: 0 00:16:45.783 Power On Hours: 0 hours 00:16:45.783 Unsafe Shutdowns: 0 00:16:45.783 Unrecoverable Media Errors: 0 00:16:45.783 Lifetime Error Log Entries: 0 00:16:45.783 Warning Temperature Time: 0 minutes 00:16:45.783 Critical Temperature Time: 0 minutes 00:16:45.783 00:16:45.783 Number of Queues 00:16:45.783 ================ 00:16:45.783 Number of I/O Submission Queues: 127 00:16:45.783 Number of I/O Completion Queues: 127 00:16:45.783 00:16:45.783 Active Namespaces 00:16:45.783 ================= 00:16:45.783 Namespace ID:1 00:16:45.783 Error Recovery Timeout: Unlimited 00:16:45.783 Command Set Identifier: NVM (00h) 00:16:45.783 Deallocate: Supported 00:16:45.783 Deallocated/Unwritten Error: Not Supported 00:16:45.783 Deallocated Read Value: Unknown 00:16:45.783 Deallocate in Write Zeroes: Not Supported 00:16:45.783 Deallocated Guard Field: 0xFFFF 00:16:45.783 Flush: Supported 00:16:45.783 Reservation: Supported 00:16:45.783 Namespace Sharing Capabilities: Multiple Controllers 00:16:45.783 Size (in LBAs): 131072 (0GiB) 00:16:45.783 Capacity (in LBAs): 131072 (0GiB) 00:16:45.783 Utilization (in LBAs): 131072 (0GiB) 00:16:45.783 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:45.783 EUI64: ABCDEF0123456789 00:16:45.783 UUID: 58f5dd42-e7ce-4bb4-aabe-f9fdaad98086 00:16:45.783 Thin Provisioning: Not Supported 00:16:45.783 Per-NS Atomic Units: Yes 00:16:45.783 Atomic Boundary Size (Normal): 0 00:16:45.783 Atomic Boundary Size (PFail): 0 00:16:45.783 Atomic Boundary Offset: 0 00:16:45.783 Maximum Single Source Range Length: 65535 00:16:45.783 Maximum Copy Length: 65535 00:16:45.783 Maximum Source Range Count: 1 00:16:45.783 NGUID/EUI64 Never Reused: No 00:16:45.783 Namespace Write Protected: No 00:16:45.783 Number of LBA Formats: 1 00:16:45.783 Current LBA Format: LBA Format #00 00:16:45.783 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:45.783 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@557 -- # xtrace_disable 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@508 -- # nvmfcleanup 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.783 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.783 rmmod nvme_tcp 00:16:45.783 rmmod nvme_fabrics 00:16:45.783 rmmod nvme_keyring 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:46.043 20:57:56 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@509 -- # '[' -n 86027 ']' 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@510 -- # killprocess 86027 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 86027 ']' 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 86027 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86027 00:16:46.043 killing process with pid 86027 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86027' 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@965 -- # kill 86027 00:16:46.043 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # wait 86027 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # iptr 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@783 -- # iptables-save 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@783 -- # iptables-restore 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:16:46.301 20:57:56 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:16:46.301 20:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.301 20:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.301 20:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # remove_spdk_ns 00:16:46.301 20:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.301 20:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.301 20:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.301 20:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # return 0 00:16:46.561 00:16:46.561 real 0m2.951s 00:16:46.561 user 0m7.566s 00:16:46.561 sys 0m0.846s 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:46.561 ************************************ 00:16:46.561 END TEST nvmf_identify 00:16:46.561 ************************************ 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.561 ************************************ 00:16:46.561 START TEST nvmf_perf 00:16:46.561 ************************************ 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:46.561 * Looking for test storage... 
00:16:46.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.561 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.562 20:57:57 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # prepare_net_devs 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # local -g is_hw=no 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # remove_spdk_ns 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@452 -- # nvmf_veth_init 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:16:46.562 Cannot find device "nvmf_init_br" 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:16:46.562 Cannot find device "nvmf_init_br2" 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 
-- # ip link set nvmf_tgt_br nomaster 00:16:46.562 Cannot find device "nvmf_tgt_br" 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # true 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.562 Cannot find device "nvmf_tgt_br2" 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # true 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:16:46.562 Cannot find device "nvmf_init_br" 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:16:46.562 Cannot find device "nvmf_init_br2" 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:16:46.562 Cannot find device "nvmf_tgt_br" 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:16:46.562 Cannot find device "nvmf_tgt_br2" 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:16:46.562 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:16:46.822 Cannot find device "nvmf_br" 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:16:46.822 Cannot find device "nvmf_init_if" 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:16:46.822 Cannot find device "nvmf_init_if2" 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.822 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link 
set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:46.822 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:16:47.085 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:47.085 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:16:47.085 00:16:47.085 --- 10.0.0.3 ping statistics --- 00:16:47.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.085 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:16:47.085 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:47.085 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:16:47.085 00:16:47.085 --- 10.0.0.4 ping statistics --- 00:16:47.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.085 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:47.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:47.085 00:16:47.085 --- 10.0.0.1 ping statistics --- 00:16:47.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.085 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:47.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:47.085 00:16:47.085 --- 10.0.0.2 ping statistics --- 00:16:47.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.085 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@453 -- # return 0 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@501 -- # nvmfpid=86278 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # waitforlisten 86278 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 86278 ']' 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.085 20:57:57 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:47.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:47.085 20:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:47.085 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:47.085 [2024-08-11 20:57:57.715286] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:16:47.085 [2024-08-11 20:57:57.715393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.085 [2024-08-11 20:57:57.856755] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.353 [2024-08-11 20:57:57.909852] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.353 [2024-08-11 20:57:57.909907] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.353 [2024-08-11 20:57:57.909917] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.353 [2024-08-11 20:57:57.909924] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.353 [2024-08-11 20:57:57.909931] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
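nvmfappstart then launches the target inside the namespace with four reactors and every tracepoint group enabled; the EAL, reactor and socket notices that follow are this process coming up. A minimal sketch of the launch, assuming an SPDK build tree in the current directory (the flags are the ones recorded in the nvmfappstart call above):

    # -i 0      shared-memory id, matching the 'spdk_trace -s nvmf -i 0' hint above
    # -e 0xFFFF enable all tracepoint groups
    # -m 0xF    run reactors on cores 0-3
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

waitforlisten then blocks until the application answers on /var/tmp/spdk.sock; a polling sketch for that wait appears after the second target start later in the log.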
00:16:47.353 [2024-08-11 20:57:57.910010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.353 [2024-08-11 20:57:57.910126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.353 [2024-08-11 20:57:57.910240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.353 [2024-08-11 20:57:57.910248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.353 [2024-08-11 20:57:57.963975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.353 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:47.353 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:16:47.353 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:16:47.353 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.353 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:47.353 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.353 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:47.353 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:47.921 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:47.921 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:48.179 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:48.179 20:57:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:48.438 20:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:48.438 20:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:48.438 20:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:48.438 20:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:48.438 20:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:48.697 [2024-08-11 20:57:59.329710] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.697 20:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:48.956 20:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:48.956 20:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:49.214 20:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:49.215 20:57:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:49.473 20:58:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:16:49.732 [2024-08-11 20:58:00.299588] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:49.732 20:58:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:49.992 20:58:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:49.992 20:58:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:49.992 20:58:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:49.992 20:58:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:49.992 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:50.927 Initializing NVMe Controllers 00:16:50.927 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:50.927 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:50.927 Initialization complete. Launching workers. 00:16:50.927 ======================================================== 00:16:50.927 Latency(us) 00:16:50.927 Device Information : IOPS MiB/s Average min max 00:16:50.927 PCIE (0000:00:10.0) NSID 1 from core 0: 23454.26 91.62 1364.93 383.14 7935.13 00:16:50.927 ======================================================== 00:16:50.927 Total : 23454.26 91.62 1364.93 383.14 7935.13 00:16:50.927 00:16:50.927 20:58:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:50.927 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:52.304 Initializing NVMe Controllers 00:16:52.304 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:52.304 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:52.304 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:52.304 Initialization complete. Launching workers. 00:16:52.304 ======================================================== 00:16:52.304 Latency(us) 00:16:52.304 Device Information : IOPS MiB/s Average min max 00:16:52.304 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3374.37 13.18 296.07 103.63 5086.09 00:16:52.304 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.77 6038.21 12029.68 00:16:52.304 ======================================================== 00:16:52.304 Total : 3497.87 13.66 573.76 103.63 12029.68 00:16:52.304 00:16:52.304 20:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:52.304 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:53.681 Initializing NVMe Controllers 00:16:53.681 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:53.681 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:53.681 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:53.681 Initialization complete. Launching workers. 
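With the target up, perf.sh drives everything over the RPC socket: a TCP transport is created, a 64 MiB malloc bdev and the local NVMe bdev Nvme0n1 (found via gen_nvme.sh / load_subsystem_config) are attached as namespaces of nqn.2016-06.io.spdk:cnode1, and a subsystem listener plus the discovery listener are bound to 10.0.0.3:4420. A condensed sketch of that RPC sequence, assuming an SPDK checkout at ./spdk and the default RPC socket (calls copied from the log above):

    RPC=./spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o                  # transport options as recorded above
    $RPC bdev_malloc_create 64 512                        # 64 MiB ramdisk, 512 B blocks -> Malloc0

    # -a: allow any host, -s: serial number
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # the fabric-side baseline run that follows is then simply:
    ./spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'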
00:16:53.681 ======================================================== 00:16:53.681 Latency(us) 00:16:53.681 Device Information : IOPS MiB/s Average min max 00:16:53.681 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9098.23 35.54 3522.36 546.80 7667.85 00:16:53.681 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3934.94 15.37 8144.08 4365.07 17006.71 00:16:53.681 ======================================================== 00:16:53.681 Total : 13033.17 50.91 4917.74 546.80 17006.71 00:16:53.681 00:16:53.681 20:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:53.681 20:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:53.681 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:56.216 Initializing NVMe Controllers 00:16:56.216 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:56.216 Controller IO queue size 128, less than required. 00:16:56.216 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:56.216 Controller IO queue size 128, less than required. 00:16:56.216 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:56.216 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:56.216 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:56.216 Initialization complete. Launching workers. 00:16:56.216 ======================================================== 00:16:56.216 Latency(us) 00:16:56.216 Device Information : IOPS MiB/s Average min max 00:16:56.216 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1792.99 448.25 71820.96 39554.47 107228.73 00:16:56.216 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 672.00 168.00 197733.32 71943.20 320059.57 00:16:56.216 ======================================================== 00:16:56.216 Total : 2464.98 616.25 106146.77 39554.47 320059.57 00:16:56.216 00:16:56.216 20:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:16:56.216 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:56.474 Initializing NVMe Controllers 00:16:56.474 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:56.474 Controller IO queue size 128, less than required. 00:16:56.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:56.474 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:56.474 Controller IO queue size 128, less than required. 00:16:56.474 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:56.474 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:16:56.474 WARNING: Some requested NVMe devices were skipped 00:16:56.474 No valid NVMe controllers or AIO or URING devices found 00:16:56.474 20:58:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:16:56.474 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:16:59.008 Initializing NVMe Controllers 00:16:59.008 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:59.008 Controller IO queue size 128, less than required. 00:16:59.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:59.008 Controller IO queue size 128, less than required. 00:16:59.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:59.008 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:59.008 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:59.008 Initialization complete. Launching workers. 00:16:59.008 00:16:59.008 ==================== 00:16:59.008 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:59.008 TCP transport: 00:16:59.008 polls: 9792 00:16:59.008 idle_polls: 5775 00:16:59.008 sock_completions: 4017 00:16:59.008 nvme_completions: 6789 00:16:59.008 submitted_requests: 10168 00:16:59.008 queued_requests: 1 00:16:59.008 00:16:59.008 ==================== 00:16:59.008 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:59.008 TCP transport: 00:16:59.008 polls: 9862 00:16:59.008 idle_polls: 5491 00:16:59.008 sock_completions: 4371 00:16:59.008 nvme_completions: 7075 00:16:59.008 submitted_requests: 10552 00:16:59.008 queued_requests: 1 00:16:59.008 ======================================================== 00:16:59.008 Latency(us) 00:16:59.008 Device Information : IOPS MiB/s Average min max 00:16:59.008 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1696.90 424.22 76907.34 50811.02 129687.87 00:16:59.008 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1768.39 442.10 73180.59 35031.96 113633.45 00:16:59.008 ======================================================== 00:16:59.008 Total : 3465.29 866.32 75005.52 35031.96 129687.87 00:16:59.008 00:16:59.008 20:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:59.008 20:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.266 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:16:59.266 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:16:59.266 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:16:59.525 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=61cb6d8b-627a-4d7b-ac55-7428a8590b32 00:16:59.525 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 61cb6d8b-627a-4d7b-ac55-7428a8590b32 00:16:59.525 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=61cb6d8b-627a-4d7b-ac55-7428a8590b32 00:16:59.525 
20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:16:59.525 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:16:59.525 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:16:59.525 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:59.783 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:16:59.783 { 00:16:59.783 "uuid": "61cb6d8b-627a-4d7b-ac55-7428a8590b32", 00:16:59.783 "name": "lvs_0", 00:16:59.783 "base_bdev": "Nvme0n1", 00:16:59.783 "total_data_clusters": 1278, 00:16:59.783 "free_clusters": 1278, 00:16:59.783 "block_size": 4096, 00:16:59.783 "cluster_size": 4194304 00:16:59.783 } 00:16:59.783 ]' 00:16:59.783 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="61cb6d8b-627a-4d7b-ac55-7428a8590b32") .free_clusters' 00:16:59.783 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1278 00:16:59.783 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="61cb6d8b-627a-4d7b-ac55-7428a8590b32") .cluster_size' 00:17:00.042 5112 00:17:00.042 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:17:00.042 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5112 00:17:00.042 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5112 00:17:00.042 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:17:00.042 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 61cb6d8b-627a-4d7b-ac55-7428a8590b32 lbd_0 5112 00:17:00.300 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b25f48ea-dd55-4eeb-b80e-e2d452eac221 00:17:00.300 20:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore b25f48ea-dd55-4eeb-b80e-e2d452eac221 lvs_n_0 00:17:00.558 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=8415358c-c22a-4fda-9ec2-6180db736e35 00:17:00.558 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 8415358c-c22a-4fda-9ec2-6180db736e35 00:17:00.558 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=8415358c-c22a-4fda-9ec2-6180db736e35 00:17:00.558 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:17:00.558 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:17:00.558 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:17:00.558 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:00.817 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:17:00.817 { 00:17:00.817 "uuid": "61cb6d8b-627a-4d7b-ac55-7428a8590b32", 00:17:00.817 "name": "lvs_0", 00:17:00.817 "base_bdev": "Nvme0n1", 00:17:00.817 "total_data_clusters": 1278, 00:17:00.817 "free_clusters": 0, 00:17:00.817 "block_size": 4096, 00:17:00.817 "cluster_size": 4194304 00:17:00.817 }, 00:17:00.817 { 00:17:00.817 "uuid": 
"8415358c-c22a-4fda-9ec2-6180db736e35", 00:17:00.817 "name": "lvs_n_0", 00:17:00.817 "base_bdev": "b25f48ea-dd55-4eeb-b80e-e2d452eac221", 00:17:00.817 "total_data_clusters": 1276, 00:17:00.817 "free_clusters": 1276, 00:17:00.817 "block_size": 4096, 00:17:00.817 "cluster_size": 4194304 00:17:00.817 } 00:17:00.817 ]' 00:17:00.817 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="8415358c-c22a-4fda-9ec2-6180db736e35") .free_clusters' 00:17:00.817 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1276 00:17:00.817 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="8415358c-c22a-4fda-9ec2-6180db736e35") .cluster_size' 00:17:01.075 5104 00:17:01.075 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:17:01.075 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5104 00:17:01.075 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5104 00:17:01.075 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:17:01.075 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8415358c-c22a-4fda-9ec2-6180db736e35 lbd_nest_0 5104 00:17:01.075 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=c222b510-1eb5-4bc7-b60f-b1f35e738d33 00:17:01.075 20:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:01.642 20:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:17:01.642 20:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 c222b510-1eb5-4bc7-b60f-b1f35e738d33 00:17:01.642 20:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:01.901 20:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:17:01.901 20:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:17:01.901 20:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:01.901 20:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:01.901 20:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:01.901 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:02.160 Initializing NVMe Controllers 00:17:02.160 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:02.160 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:02.160 WARNING: Some requested NVMe devices were skipped 00:17:02.160 No valid NVMe controllers or AIO or URING devices found 00:17:02.160 20:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:02.160 20:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 
-o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:02.160 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:14.368 Initializing NVMe Controllers 00:17:14.368 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:14.368 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:14.368 Initialization complete. Launching workers. 00:17:14.368 ======================================================== 00:17:14.368 Latency(us) 00:17:14.368 Device Information : IOPS MiB/s Average min max 00:17:14.368 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 852.20 106.52 1173.08 383.32 8525.49 00:17:14.368 ======================================================== 00:17:14.368 Total : 852.20 106.52 1173.08 383.32 8525.49 00:17:14.368 00:17:14.368 20:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:14.368 20:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:14.368 20:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:14.368 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:14.368 Initializing NVMe Controllers 00:17:14.368 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:14.368 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:14.368 WARNING: Some requested NVMe devices were skipped 00:17:14.368 No valid NVMe controllers or AIO or URING devices found 00:17:14.368 20:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:14.368 20:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:14.368 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:24.349 Initializing NVMe Controllers 00:17:24.349 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:24.349 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:24.349 Initialization complete. Launching workers. 
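These back-to-back spdk_nvme_perf invocations are a sweep driven by the two arrays set in perf.sh: queue depths 1, 32 and 128 crossed with I/O sizes 512 and 131072 bytes, each run for 10 seconds of 50/50 random read/write (the 512-byte points are skipped with a warning because the lvol namespace uses 4096-byte blocks). A minimal recreation of the sweep, with the transport string taken from the runs above:

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
        for o in "${io_size[@]}"; do
            # -t 10: run for 10 s; -w randrw -M 50: 50% reads, 50% writes
            ./spdk/build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
        done
    done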
00:17:24.349 ======================================================== 00:17:24.349 Latency(us) 00:17:24.349 Device Information : IOPS MiB/s Average min max 00:17:24.349 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1344.86 168.11 23808.32 5462.50 73062.63 00:17:24.349 ======================================================== 00:17:24.349 Total : 1344.86 168.11 23808.32 5462.50 73062.63 00:17:24.349 00:17:24.349 20:58:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:24.349 20:58:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:24.350 20:58:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:24.350 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:24.350 Initializing NVMe Controllers 00:17:24.350 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:24.350 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:24.350 WARNING: Some requested NVMe devices were skipped 00:17:24.350 No valid NVMe controllers or AIO or URING devices found 00:17:24.350 20:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:24.350 20:58:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:24.350 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:34.414 Initializing NVMe Controllers 00:17:34.414 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:34.414 Controller IO queue size 128, less than required. 00:17:34.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:34.414 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:34.414 Initialization complete. Launching workers. 
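The 5112 MiB and 5104 MiB lvol sizes used earlier (lbd_0 on lvs_0 and lbd_nest_0 on the nested lvs_n_0) come from get_lvs_free_mb, which multiplies the lvstore's free_clusters by its cluster_size as reported over RPC: 1278 x 4 MiB and 1276 x 4 MiB respectively. A small sketch of the same computation, assuming jq is available and reusing the lvs_0 UUID from the log:

    RPC=./spdk/scripts/rpc.py
    uuid=61cb6d8b-627a-4d7b-ac55-7428a8590b32

    lvs_info=$($RPC bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs_info")   # 1278
    cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size"  <<< "$lvs_info")   # 4194304
    free_mb=$((fc * cs / 1024 / 1024))                                         # 5112 MiB
    $RPC bdev_lvol_create -u "$uuid" lbd_0 "$free_mb"                          # lvol of exactly that size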
00:17:34.414 ======================================================== 00:17:34.414 Latency(us) 00:17:34.414 Device Information : IOPS MiB/s Average min max 00:17:34.414 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4443.40 555.42 28861.59 11375.65 57379.70 00:17:34.414 ======================================================== 00:17:34.414 Total : 4443.40 555.42 28861.59 11375.65 57379.70 00:17:34.414 00:17:34.414 20:58:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.415 20:58:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c222b510-1eb5-4bc7-b60f-b1f35e738d33 00:17:34.415 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:17:34.673 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b25f48ea-dd55-4eeb-b80e-e2d452eac221 00:17:34.932 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # nvmfcleanup 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.191 rmmod nvme_tcp 00:17:35.191 rmmod nvme_fabrics 00:17:35.191 rmmod nvme_keyring 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # '[' -n 86278 ']' 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # killprocess 86278 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 86278 ']' 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 86278 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86278 00:17:35.191 killing process with pid 86278 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86278' 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@965 -- # kill 86278 00:17:35.191 20:58:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # wait 86278 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # iptr 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@783 -- # iptables-save 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@783 -- # iptables-restore 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:17:37.095 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # remove_spdk_ns 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # return 0 00:17:37.096 00:17:37.096 real 0m50.578s 00:17:37.096 user 3m9.893s 00:17:37.096 sys 0m11.436s 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:37.096 ************************************ 00:17:37.096 END TEST nvmf_perf 00:17:37.096 ************************************ 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.096 ************************************ 00:17:37.096 START TEST nvmf_fio_host 00:17:37.096 ************************************ 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:37.096 * Looking for test storage... 00:17:37.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:37.096 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # prepare_net_devs 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # local -g is_hw=no 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # remove_spdk_ns 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@452 -- # nvmf_veth_init 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:37.355 
20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:17:37.355 Cannot find device "nvmf_init_br" 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:17:37.355 Cannot find device "nvmf_init_br2" 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:17:37.355 Cannot find device "nvmf_tgt_br" 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # true 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:17:37.355 Cannot find device "nvmf_tgt_br2" 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # true 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:17:37.355 Cannot find device "nvmf_init_br" 00:17:37.355 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:37.356 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:17:37.356 Cannot find device "nvmf_init_br2" 00:17:37.356 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:37.356 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:17:37.356 Cannot find device "nvmf_tgt_br" 00:17:37.356 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:17:37.356 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:17:37.356 Cannot find device "nvmf_tgt_br2" 00:17:37.356 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:17:37.356 20:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:17:37.356 Cannot find device "nvmf_br" 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:17:37.356 Cannot find device "nvmf_init_if" 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:17:37.356 Cannot find device "nvmf_init_if2" 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:37.356 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:37.356 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:17:37.356 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:37.615 
20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:17:37.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:37.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:37.615 00:17:37.615 --- 10.0.0.3 ping statistics --- 00:17:37.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.615 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:17:37.615 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:37.615 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:17:37.615 00:17:37.615 --- 10.0.0.4 ping statistics --- 00:17:37.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.615 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:37.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:17:37.615 00:17:37.615 --- 10.0.0.1 ping statistics --- 00:17:37.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.615 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:37.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:37.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:37.615 00:17:37.615 --- 10.0.0.2 ping statistics --- 00:17:37.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.615 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@453 -- # return 0 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87132 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87132 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 87132 ']' 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.615 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.616 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.616 20:58:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.616 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:37.616 [2024-08-11 20:58:48.367335] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:17:37.616 [2024-08-11 20:58:48.367429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.874 [2024-08-11 20:58:48.505003] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.874 [2024-08-11 20:58:48.566120] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.874 [2024-08-11 20:58:48.566389] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.874 [2024-08-11 20:58:48.566454] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.874 [2024-08-11 20:58:48.566549] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.874 [2024-08-11 20:58:48.566623] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.874 [2024-08-11 20:58:48.566828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.874 [2024-08-11 20:58:48.566914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.874 [2024-08-11 20:58:48.567486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.874 [2024-08-11 20:58:48.567493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.874 [2024-08-11 20:58:48.618302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.810 20:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.810 20:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:17:38.810 20:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:38.810 [2024-08-11 20:58:49.567532] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.068 20:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:39.068 20:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:39.068 20:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.068 20:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:39.327 Malloc1 00:17:39.327 20:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:39.585 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:39.585 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:39.844 [2024-08-11 20:58:50.582397] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:39.844 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:40.102 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:40.102 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:40.102 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:40.102 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:40.102 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:40.102 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:40.102 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:40.102 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:17:40.102 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:40.102 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:40.103 20:58:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:40.362 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:40.362 fio-3.35 00:17:40.362 Starting 1 thread 00:17:40.362 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:42.893 00:17:42.893 test: (groupid=0, jobs=1): err= 0: pid=87210: Sun Aug 11 20:58:53 2024 00:17:42.893 read: IOPS=10.2k, BW=39.7MiB/s 
(41.6MB/s)(79.6MiB/2006msec) 00:17:42.893 slat (nsec): min=1536, max=411588, avg=1985.26, stdev=3319.86 00:17:42.893 clat (usec): min=1905, max=12328, avg=6559.03, stdev=552.04 00:17:42.893 lat (usec): min=1929, max=12330, avg=6561.01, stdev=551.81 00:17:42.893 clat percentiles (usec): 00:17:42.893 | 1.00th=[ 5604], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6194], 00:17:42.893 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6652], 00:17:42.893 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7373], 00:17:42.893 | 99.00th=[ 7898], 99.50th=[ 8455], 99.90th=[11731], 99.95th=[11863], 00:17:42.893 | 99.99th=[12387] 00:17:42.893 bw ( KiB/s): min=39592, max=41128, per=99.95%, avg=40636.00, stdev=713.17, samples=4 00:17:42.893 iops : min= 9898, max=10282, avg=10159.00, stdev=178.29, samples=4 00:17:42.893 write: IOPS=10.2k, BW=39.8MiB/s (41.7MB/s)(79.8MiB/2006msec); 0 zone resets 00:17:42.893 slat (nsec): min=1601, max=136707, avg=2075.50, stdev=2027.98 00:17:42.893 clat (usec): min=1819, max=12080, avg=5968.78, stdev=511.65 00:17:42.893 lat (usec): min=1834, max=12081, avg=5970.86, stdev=511.52 00:17:42.893 clat percentiles (usec): 00:17:42.893 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5669], 00:17:42.893 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:17:42.893 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6652], 00:17:42.893 | 99.00th=[ 7177], 99.50th=[ 7635], 99.90th=[10814], 99.95th=[11207], 00:17:42.893 | 99.99th=[11994] 00:17:42.893 bw ( KiB/s): min=40072, max=41152, per=100.00%, avg=40722.00, stdev=484.88, samples=4 00:17:42.893 iops : min=10018, max=10288, avg=10180.50, stdev=121.22, samples=4 00:17:42.893 lat (msec) : 2=0.02%, 4=0.34%, 10=99.35%, 20=0.29% 00:17:42.893 cpu : usr=70.47%, sys=22.49%, ctx=19, majf=0, minf=7 00:17:42.893 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:42.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:42.893 issued rwts: total=20389,20417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.893 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:42.893 00:17:42.893 Run status group 0 (all jobs): 00:17:42.893 READ: bw=39.7MiB/s (41.6MB/s), 39.7MiB/s-39.7MiB/s (41.6MB/s-41.6MB/s), io=79.6MiB (83.5MB), run=2006-2006msec 00:17:42.893 WRITE: bw=39.8MiB/s (41.7MB/s), 39.8MiB/s-39.8MiB/s (41.7MB/s-41.7MB/s), io=79.8MiB (83.6MB), run=2006-2006msec 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:42.893 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:42.894 20:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:42.894 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:42.894 fio-3.35 00:17:42.894 Starting 1 thread 00:17:42.894 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:45.427 00:17:45.427 test: (groupid=0, jobs=1): err= 0: pid=87259: Sun Aug 11 20:58:55 2024 00:17:45.427 read: IOPS=9021, BW=141MiB/s (148MB/s)(283MiB/2004msec) 00:17:45.427 slat (usec): min=2, max=137, avg= 3.51, stdev= 2.75 00:17:45.427 clat (usec): min=2368, max=16554, avg=7950.86, stdev=2295.12 00:17:45.427 lat (usec): min=2370, max=16558, avg=7954.38, stdev=2295.26 00:17:45.427 clat percentiles (usec): 00:17:45.427 | 1.00th=[ 3687], 5.00th=[ 4555], 10.00th=[ 5145], 20.00th=[ 5997], 00:17:45.427 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 8356], 00:17:45.427 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[11076], 95.00th=[12125], 00:17:45.427 | 99.00th=[14091], 99.50th=[14877], 99.90th=[15401], 99.95th=[15795], 00:17:45.427 | 99.99th=[16188] 00:17:45.427 bw ( KiB/s): min=68384, max=74208, per=49.14%, avg=70928.00, stdev=2420.39, samples=4 00:17:45.427 iops : min= 4274, max= 4638, avg=4433.00, stdev=151.27, samples=4 00:17:45.427 write: IOPS=5193, BW=81.2MiB/s (85.1MB/s)(144MiB/1774msec); 0 zone resets 00:17:45.427 slat (usec): min=28, max=341, avg=35.82, stdev=10.82 00:17:45.427 clat (usec): min=4404, max=19409, avg=11400.02, stdev=1931.88 00:17:45.427 lat 
(usec): min=4453, max=19455, avg=11435.83, stdev=1933.44 00:17:45.427 clat percentiles (usec): 00:17:45.427 | 1.00th=[ 7439], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9634], 00:17:45.427 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:17:45.427 | 70.00th=[12518], 80.00th=[13173], 90.00th=[13960], 95.00th=[14484], 00:17:45.427 | 99.00th=[15664], 99.50th=[15926], 99.90th=[18744], 99.95th=[19006], 00:17:45.427 | 99.99th=[19530] 00:17:45.427 bw ( KiB/s): min=71392, max=76160, per=88.70%, avg=73712.00, stdev=2146.39, samples=4 00:17:45.427 iops : min= 4462, max= 4760, avg=4607.00, stdev=134.15, samples=4 00:17:45.427 lat (msec) : 4=1.31%, 10=61.46%, 20=37.23% 00:17:45.427 cpu : usr=78.38%, sys=15.93%, ctx=5, majf=0, minf=3 00:17:45.427 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:17:45.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:45.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:45.427 issued rwts: total=18080,9214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:45.427 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:45.427 00:17:45.427 Run status group 0 (all jobs): 00:17:45.427 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=283MiB (296MB), run=2004-2004msec 00:17:45.427 WRITE: bw=81.2MiB/s (85.1MB/s), 81.2MiB/s-81.2MiB/s (85.1MB/s-85.1MB/s), io=144MiB (151MB), run=1774-1774msec 00:17:45.427 20:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:45.427 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:17:45.686 Nvme0n1 00:17:45.945 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:17:46.204 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=a2c54e78-ad89-4b88-a9cb-7d7ad10a433f 00:17:46.204 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb a2c54e78-ad89-4b88-a9cb-7d7ad10a433f 00:17:46.204 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local 
lvs_uuid=a2c54e78-ad89-4b88-a9cb-7d7ad10a433f 00:17:46.204 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:17:46.204 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:17:46.204 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:17:46.204 20:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:46.464 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:17:46.464 { 00:17:46.464 "uuid": "a2c54e78-ad89-4b88-a9cb-7d7ad10a433f", 00:17:46.464 "name": "lvs_0", 00:17:46.464 "base_bdev": "Nvme0n1", 00:17:46.464 "total_data_clusters": 4, 00:17:46.464 "free_clusters": 4, 00:17:46.464 "block_size": 4096, 00:17:46.464 "cluster_size": 1073741824 00:17:46.464 } 00:17:46.464 ]' 00:17:46.464 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="a2c54e78-ad89-4b88-a9cb-7d7ad10a433f") .free_clusters' 00:17:46.464 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=4 00:17:46.464 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="a2c54e78-ad89-4b88-a9cb-7d7ad10a433f") .cluster_size' 00:17:46.464 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:17:46.464 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4096 00:17:46.464 4096 00:17:46.464 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4096 00:17:46.464 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:17:46.723 92692f49-5da7-46dd-a490-6457683d2ae6 00:17:46.723 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:17:46.982 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:17:47.241 20:58:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:47.500 20:58:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:47.759 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:47.759 fio-3.35 00:17:47.759 Starting 1 thread 00:17:47.759 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:50.307 00:17:50.307 test: (groupid=0, jobs=1): err= 0: pid=87369: Sun Aug 11 20:59:00 2024 00:17:50.307 read: IOPS=6434, BW=25.1MiB/s (26.4MB/s)(50.5MiB/2008msec) 00:17:50.307 slat (nsec): min=1602, max=366109, avg=2830.53, stdev=5040.08 00:17:50.307 clat (usec): min=3011, max=18048, avg=10373.61, stdev=858.57 00:17:50.307 lat (usec): min=3021, max=18050, avg=10376.44, stdev=858.24 00:17:50.307 clat percentiles (usec): 00:17:50.307 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:17:50.307 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:17:50.307 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:17:50.307 | 99.00th=[12256], 99.50th=[12518], 99.90th=[16909], 99.95th=[17695], 00:17:50.307 | 99.99th=[17957] 00:17:50.307 bw ( KiB/s): min=24976, max=26312, per=99.89%, avg=25710.00, stdev=627.35, samples=4 00:17:50.307 iops : min= 6244, max= 6578, avg=6427.50, stdev=156.84, samples=4 00:17:50.307 write: IOPS=6435, BW=25.1MiB/s (26.4MB/s)(50.5MiB/2008msec); 0 zone resets 00:17:50.307 slat (nsec): min=1700, max=286226, avg=2881.47, stdev=3868.14 00:17:50.307 clat (usec): min=2585, max=18076, avg=9441.29, stdev=813.04 
00:17:50.307 lat (usec): min=2600, max=18078, avg=9444.17, stdev=812.85 00:17:50.307 clat percentiles (usec): 00:17:50.307 | 1.00th=[ 7767], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8848], 00:17:50.307 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:17:50.307 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:17:50.307 | 99.00th=[11207], 99.50th=[11600], 99.90th=[15401], 99.95th=[16909], 00:17:50.307 | 99.99th=[17957] 00:17:50.307 bw ( KiB/s): min=25344, max=26120, per=99.92%, avg=25720.00, stdev=324.44, samples=4 00:17:50.307 iops : min= 6336, max= 6530, avg=6430.00, stdev=81.11, samples=4 00:17:50.307 lat (msec) : 4=0.06%, 10=55.19%, 20=44.75% 00:17:50.307 cpu : usr=70.30%, sys=22.92%, ctx=5, majf=0, minf=7 00:17:50.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:50.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:50.307 issued rwts: total=12920,12922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:50.307 00:17:50.307 Run status group 0 (all jobs): 00:17:50.307 READ: bw=25.1MiB/s (26.4MB/s), 25.1MiB/s-25.1MiB/s (26.4MB/s-26.4MB/s), io=50.5MiB (52.9MB), run=2008-2008msec 00:17:50.307 WRITE: bw=25.1MiB/s (26.4MB/s), 25.1MiB/s-25.1MiB/s (26.4MB/s-26.4MB/s), io=50.5MiB (52.9MB), run=2008-2008msec 00:17:50.307 20:59:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:50.307 20:59:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:17:50.566 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=2aa0d13d-cd03-48cf-b0f1-59e2ad491c95 00:17:50.566 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 2aa0d13d-cd03-48cf-b0f1-59e2ad491c95 00:17:50.566 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=2aa0d13d-cd03-48cf-b0f1-59e2ad491c95 00:17:50.566 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:17:50.566 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:17:50.566 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:17:50.566 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:50.825 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:17:50.825 { 00:17:50.825 "uuid": "a2c54e78-ad89-4b88-a9cb-7d7ad10a433f", 00:17:50.825 "name": "lvs_0", 00:17:50.825 "base_bdev": "Nvme0n1", 00:17:50.825 "total_data_clusters": 4, 00:17:50.825 "free_clusters": 0, 00:17:50.825 "block_size": 4096, 00:17:50.825 "cluster_size": 1073741824 00:17:50.825 }, 00:17:50.825 { 00:17:50.825 "uuid": "2aa0d13d-cd03-48cf-b0f1-59e2ad491c95", 00:17:50.825 "name": "lvs_n_0", 00:17:50.825 "base_bdev": "92692f49-5da7-46dd-a490-6457683d2ae6", 00:17:50.825 "total_data_clusters": 1022, 00:17:50.825 "free_clusters": 1022, 00:17:50.825 "block_size": 4096, 00:17:50.825 "cluster_size": 4194304 00:17:50.825 } 00:17:50.825 ]' 00:17:50.825 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="2aa0d13d-cd03-48cf-b0f1-59e2ad491c95") .free_clusters' 00:17:50.825 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1022 00:17:50.825 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="2aa0d13d-cd03-48cf-b0f1-59e2ad491c95") .cluster_size' 00:17:51.084 4088 00:17:51.084 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:17:51.084 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4088 00:17:51.084 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4088 00:17:51.084 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:17:51.342 9b659651-1e4e-4807-8fae-83e45b767a1d 00:17:51.342 20:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:17:51.601 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:17:51.860 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:52.119 20:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:52.119 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:52.119 fio-3.35 00:17:52.119 Starting 1 thread 00:17:52.119 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:54.655 00:17:54.655 test: (groupid=0, jobs=1): err= 0: pid=87447: Sun Aug 11 20:59:05 2024 00:17:54.655 read: IOPS=4963, BW=19.4MiB/s (20.3MB/s)(39.0MiB/2011msec) 00:17:54.656 slat (nsec): min=1621, max=292136, avg=2815.91, stdev=4859.40 00:17:54.656 clat (usec): min=3778, max=23387, avg=13505.58, stdev=1182.08 00:17:54.656 lat (usec): min=3786, max=23389, avg=13508.40, stdev=1181.78 00:17:54.656 clat percentiles (usec): 00:17:54.656 | 1.00th=[10945], 5.00th=[11731], 10.00th=[12125], 20.00th=[12649], 00:17:54.656 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13829], 00:17:54.656 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:17:54.656 | 99.00th=[16188], 99.50th=[16581], 99.90th=[21365], 99.95th=[21627], 00:17:54.656 | 99.99th=[23462] 00:17:54.656 bw ( KiB/s): min=18464, max=20504, per=99.87%, avg=19828.00, stdev=940.03, samples=4 00:17:54.656 iops : min= 4616, max= 5126, avg=4957.00, stdev=235.01, samples=4 00:17:54.656 write: IOPS=4954, BW=19.4MiB/s (20.3MB/s)(38.9MiB/2011msec); 0 zone resets 00:17:54.656 slat (nsec): min=1650, max=245044, avg=2905.17, stdev=3878.97 00:17:54.656 clat (usec): min=2524, max=21716, avg=12209.83, stdev=1124.19 00:17:54.656 lat (usec): min=2537, max=21718, avg=12212.73, stdev=1124.08 00:17:54.656 clat percentiles (usec): 00:17:54.656 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:17:54.656 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:17:54.656 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[13829], 00:17:54.656 | 99.00th=[14746], 99.50th=[15401], 99.90th=[20055], 99.95th=[21365], 00:17:54.656 | 99.99th=[21627] 00:17:54.656 bw ( KiB/s): min=19464, max=19968, per=99.95%, avg=19810.00, stdev=238.43, samples=4 00:17:54.656 iops : min= 4866, max= 4992, avg=4952.50, stdev=59.61, samples=4 00:17:54.656 lat (msec) : 4=0.04%, 10=0.76%, 20=99.08%, 50=0.13% 00:17:54.656 cpu : usr=75.27%, sys=20.30%, ctx=5, majf=0, minf=7 00:17:54.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:54.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:17:54.656 issued rwts: total=9982,9964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.656 00:17:54.656 Run status group 0 (all jobs): 00:17:54.656 READ: bw=19.4MiB/s (20.3MB/s), 19.4MiB/s-19.4MiB/s (20.3MB/s-20.3MB/s), io=39.0MiB (40.9MB), run=2011-2011msec 00:17:54.656 WRITE: bw=19.4MiB/s (20.3MB/s), 19.4MiB/s-19.4MiB/s (20.3MB/s-20.3MB/s), io=38.9MiB (40.8MB), run=2011-2011msec 00:17:54.656 20:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:54.656 20:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:17:54.914 20:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:17:55.173 20:59:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:17:55.432 20:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:17:55.690 20:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:17:55.949 20:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@508 -- # nvmfcleanup 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.516 rmmod nvme_tcp 00:17:56.516 rmmod nvme_fabrics 00:17:56.516 rmmod nvme_keyring 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@509 -- # '[' -n 87132 ']' 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@510 -- # killprocess 87132 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 87132 ']' 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 87132 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87132 00:17:56.516 killing process 
with pid 87132 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87132' 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 87132 00:17:56.516 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 87132 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # iptr 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@783 -- # iptables-save 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@783 -- # iptables-restore 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:56.775 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.034 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # remove_spdk_ns 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@296 -- # return 0 00:17:57.035 00:17:57.035 real 0m19.833s 00:17:57.035 user 1m26.035s 00:17:57.035 sys 0m4.778s 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.035 ************************************ 00:17:57.035 END TEST nvmf_fio_host 00:17:57.035 ************************************ 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.035 ************************************ 00:17:57.035 START TEST nvmf_failover 00:17:57.035 ************************************ 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:57.035 * Looking for test storage... 00:17:57.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.035 20:59:07 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.035 20:59:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # prepare_net_devs 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # local -g is_hw=no 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # remove_spdk_ns 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@452 -- # nvmf_veth_init 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:17:57.035 Cannot find device "nvmf_init_br" 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:17:57.035 Cannot find device "nvmf_init_br2" 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:17:57.035 Cannot find device "nvmf_tgt_br" 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # true 00:17:57.035 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.294 Cannot find device "nvmf_tgt_br2" 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # true 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:17:57.294 Cannot find device "nvmf_init_br" 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:17:57.294 Cannot find device "nvmf_init_br2" 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:17:57.294 Cannot find device "nvmf_tgt_br" 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:17:57.294 Cannot find device "nvmf_tgt_br2" 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:17:57.294 Cannot find device "nvmf_br" 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:17:57.294 Cannot find device "nvmf_init_if" 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:17:57.294 Cannot find device "nvmf_init_if2" 00:17:57.294 20:59:07 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:17:57.294 20:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.294 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.294 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.294 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:17:57.294 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:17:57.294 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- 
# ip link set nvmf_init_br master nvmf_br 00:17:57.294 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:57.294 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.294 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:17:57.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:17:57.553 00:17:57.553 --- 10.0.0.3 ping statistics --- 00:17:57.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.553 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:17:57.553 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:57.553 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:57.553 00:17:57.553 --- 10.0.0.4 ping statistics --- 00:17:57.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.553 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:17:57.553 00:17:57.553 --- 10.0.0.1 ping statistics --- 00:17:57.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.553 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:57.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:57.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:17:57.553 00:17:57.553 --- 10.0.0.2 ping statistics --- 00:17:57.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.553 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@453 -- # return 0 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@501 -- # nvmfpid=87724 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # waitforlisten 87724 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 87724 ']' 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:57.553 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:57.553 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:17:57.553 [2024-08-11 20:59:08.198484] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
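For reference, the virtual topology that the nvmf_veth_init step above builds (and that the four ping checks just verified) can be reproduced standalone roughly as follows. This is a condensed sketch, not the test script itself: interface names, addresses, and firewall rules are copied from the trace, and the SPDK comment tags on the iptables rules are omitted.

```bash
# Sketch of the veth/netns plumbing shown in the trace above.
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: *_if ends stay in the root namespace, the target ends move into the netns.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: initiators on 10.0.0.1/.2, targets on 10.0.0.3/.4 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring the links up, then tie the bridge-side ends together under nvmf_br.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Accept NVMe/TCP traffic on port 4420 and let the bridge forward to itself.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks in both directions, as performed in the trace.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ping -c 1 10.0.0.2
```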
00:17:57.553 [2024-08-11 20:59:08.198743] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.811 [2024-08-11 20:59:08.337990] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:57.811 [2024-08-11 20:59:08.394270] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.811 [2024-08-11 20:59:08.394623] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.811 [2024-08-11 20:59:08.394767] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.811 [2024-08-11 20:59:08.394898] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.811 [2024-08-11 20:59:08.394931] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.811 [2024-08-11 20:59:08.395206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.811 [2024-08-11 20:59:08.395318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.811 [2024-08-11 20:59:08.395324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.811 [2024-08-11 20:59:08.448536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:57.811 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:57.811 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:17:57.811 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:17:57.811 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.811 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:57.811 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.811 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:58.069 [2024-08-11 20:59:08.740521] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.069 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:58.327 Malloc0 00:17:58.327 20:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:58.585 20:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:58.843 20:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:59.101 [2024-08-11 20:59:09.828087] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:59.101 20:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:59.359 [2024-08-11 20:59:10.076439] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:59.359 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:17:59.618 [2024-08-11 20:59:10.352848] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:17:59.618 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87774 00:17:59.618 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:59.618 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.618 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87774 /var/tmp/bdevperf.sock 00:17:59.618 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 87774 ']' 00:17:59.618 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.618 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:59.618 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
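At this point the target side is fully provisioned (transport, malloc bdev, subsystem, namespace, and three listeners on 10.0.0.3) and the bdevperf initiator is being started in "wait for RPC" mode. A condensed sketch of that sequence, with every path, NQN, and port copied from the trace above (the backgrounding of bdevperf is an illustration of what the script's waitforlisten step implies):

```bash
# Target-side provisioning as issued by host/failover.sh in the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# Three listeners on the namespaced target address; the failover run below
# removes and re-adds them one at a time while I/O is in flight.
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s "$port"
done

# Initiator side: bdevperf waits for configuration over its own RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
```

The bdev_nvme_attach_controller calls that follow in the trace, issued against /var/tmp/bdevperf.sock for ports 4420 and 4421, give bdevperf multiple paths to the same subsystem; the listener removals and re-additions later in the run are what exercise the path failover.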
00:17:59.618 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:59.618 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:00.185 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.185 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:00.185 20:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:00.443 NVMe0n1 00:18:00.443 20:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:00.701 00:18:00.701 20:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87790 00:18:00.701 20:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:00.701 20:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:01.634 20:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:01.892 20:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:05.174 20:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:05.432 00:18:05.432 20:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:05.690 20:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:08.981 20:59:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:08.982 [2024-08-11 20:59:19.519933] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:08.982 20:59:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:09.958 20:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:10.217 20:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 87790 00:18:16.785 0 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 87774 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 87774 ']' 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 87774 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:16.785 20:59:26 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87774 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:16.785 killing process with pid 87774 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87774' 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@965 -- # kill 87774 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # wait 87774 00:18:16.785 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:16.785 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:18:16.785 [2024-08-11 20:59:10.418541] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:18:16.785 [2024-08-11 20:59:10.418658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87774 ] 00:18:16.785 [2024-08-11 20:59:10.551607] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.785 [2024-08-11 20:59:10.607986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.785 [2024-08-11 20:59:10.660948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:16.785 Running I/O for 15 seconds... 00:18:16.785 [2024-08-11 20:59:12.645519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645751] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.785 [2024-08-11 20:59:12.645972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.785 [2024-08-11 20:59:12.645995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646072] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76152 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.646534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:16.786 [2024-08-11 20:59:12.646696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.646980] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.646994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.647006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.647022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.786 [2024-08-11 20:59:12.647035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.647048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.647067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.647086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.647098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.647113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.647125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.647139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.647151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.647165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.786 [2024-08-11 20:59:12.647178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.786 [2024-08-11 20:59:12.647192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.787 [2024-08-11 20:59:12.647257] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:16.787 [2024-08-11 20:59:12.647839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.647977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.647990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648126] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.787 [2024-08-11 20:59:12.648340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.787 [2024-08-11 20:59:12.648354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:27 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.648976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.648990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75856 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.788 [2024-08-11 20:59:12.649300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:16.788 [2024-08-11 20:59:12.649326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550b10 is same with the state(6) to be set 00:18:16.788 [2024-08-11 20:59:12.649354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.788 [2024-08-11 20:59:12.649364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.788 [2024-08-11 20:59:12.649374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75944 len:8 PRP1 0x0 PRP2 0x0 00:18:16.788 [2024-08-11 20:59:12.649390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649445] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1550b10 was disconnected and freed. reset controller. 00:18:16.788 [2024-08-11 20:59:12.649463] bdev_nvme.c:1861:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:16.788 [2024-08-11 20:59:12.649511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.788 [2024-08-11 20:59:12.649531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.788 [2024-08-11 20:59:12.649558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.788 [2024-08-11 20:59:12.649583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.788 [2024-08-11 20:59:12.649622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.788 [2024-08-11 20:59:12.649634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:16.788 [2024-08-11 20:59:12.652982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.788 [2024-08-11 20:59:12.653018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152f700 (9): Bad file descriptor 00:18:16.788 [2024-08-11 20:59:12.684202] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:16.789 [2024-08-11 20:59:16.290457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.290522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.290604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.290647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.290674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.290701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.290728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.290754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.290781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.789 [2024-08-11 20:59:16.290807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.789 [2024-08-11 20:59:16.290834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 
20:59:16.290848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.789 [2024-08-11 20:59:16.290861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.789 [2024-08-11 20:59:16.290888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.789 [2024-08-11 20:59:16.290914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.789 [2024-08-11 20:59:16.290949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.789 [2024-08-11 20:59:16.290977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.290991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.789 [2024-08-11 20:59:16.291004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291402] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.789 [2024-08-11 20:59:16.291495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.789 [2024-08-11 20:59:16.291509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.291521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.291548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.291576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.291637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.291672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.291700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.291755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.291782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.291809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.291837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.291864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.291891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.291919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.291947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.291974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.291998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122776 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 
[2024-08-11 20:59:16.292338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.790 [2024-08-11 20:59:16.292365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292634] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.790 [2024-08-11 20:59:16.292733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.790 [2024-08-11 20:59:16.292746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.292760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.292773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.292788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.292807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.292822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.292835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.292850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.292863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.292877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.292890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.292905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.292917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.292938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.292951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.292965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.292978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.292993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.293006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.293034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.293062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.293089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.293116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.293144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.293178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.293205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.293233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.791 [2024-08-11 20:59:16.293260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.791 [2024-08-11 20:59:16.293700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1553ca0 is same with the state(6) to be set 00:18:16.791 [2024-08-11 20:59:16.293729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.791 [2024-08-11 20:59:16.293740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.791 [2024-08-11 20:59:16.293750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122432 len:8 PRP1 0x0 PRP2 0x0 00:18:16.791 [2024-08-11 20:59:16.293763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.791 [2024-08-11 20:59:16.293787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.791 [2024-08-11 
20:59:16.293796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122992 len:8 PRP1 0x0 PRP2 0x0 00:18:16.791 [2024-08-11 20:59:16.293809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.791 [2024-08-11 20:59:16.293831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.791 [2024-08-11 20:59:16.293841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123000 len:8 PRP1 0x0 PRP2 0x0 00:18:16.791 [2024-08-11 20:59:16.293853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.791 [2024-08-11 20:59:16.293875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.791 [2024-08-11 20:59:16.293891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123008 len:8 PRP1 0x0 PRP2 0x0 00:18:16.791 [2024-08-11 20:59:16.293905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.791 [2024-08-11 20:59:16.293918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.791 [2024-08-11 20:59:16.293928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.293937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123016 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.293950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.293962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.293971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123024 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123032 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123040 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123048 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123056 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123064 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123072 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122440 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:122448 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122456 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122464 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122472 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122480 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122488 len:8 PRP1 0x0 PRP2 0x0 00:18:16.792 [2024-08-11 20:59:16.294620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.792 [2024-08-11 20:59:16.294666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.792 [2024-08-11 20:59:16.294676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122496 len:8 PRP1 0x0 PRP2 0x0 
00:18:16.792 [2024-08-11 20:59:16.294696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294750] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1553ca0 was disconnected and freed. reset controller. 00:18:16.792 [2024-08-11 20:59:16.294767] bdev_nvme.c:1861:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:18:16.792 [2024-08-11 20:59:16.294818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.792 [2024-08-11 20:59:16.294838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.792 [2024-08-11 20:59:16.294865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.792 [2024-08-11 20:59:16.294891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.792 [2024-08-11 20:59:16.294917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:16.294929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:16.792 [2024-08-11 20:59:16.294973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152f700 (9): Bad file descriptor 00:18:16.792 [2024-08-11 20:59:16.298341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.792 [2024-08-11 20:59:16.326907] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:16.792 [2024-08-11 20:59:20.815729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.792 [2024-08-11 20:59:20.815798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:20.815841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.792 [2024-08-11 20:59:20.815855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:20.815870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.792 [2024-08-11 20:59:20.815894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:20.815920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.792 [2024-08-11 20:59:20.815932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:20.815946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.792 [2024-08-11 20:59:20.815982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:20.815997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.792 [2024-08-11 20:59:20.816010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:20.816024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.792 [2024-08-11 20:59:20.816036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.792 [2024-08-11 20:59:20.816050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.792 [2024-08-11 20:59:20.816062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.816088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.816114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816128] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:111320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.816140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.816166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.816192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.816220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.816246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.816279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.816951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.816978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.816993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111376 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.817016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.817029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.817042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.817062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.817075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.817090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.817102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.817116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.817128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.817142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.817154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.817168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.793 [2024-08-11 20:59:20.817180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.817194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.817207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.793 [2024-08-11 20:59:20.817220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.793 [2024-08-11 20:59:20.817233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:16.794 [2024-08-11 20:59:20.817286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 
20:59:20.817555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.794 [2024-08-11 20:59:20.817633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:111512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.817975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.817996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.818012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.818024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.818039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.818051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.818065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.818078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.818092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.818104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.818125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.818139] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.818154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.818168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.818183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.818196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.818210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.818223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.818237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.794 [2024-08-11 20:59:20.818249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.794 [2024-08-11 20:59:20.818263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.795 [2024-08-11 20:59:20.818340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.795 [2024-08-11 20:59:20.818366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.795 [2024-08-11 20:59:20.818397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.795 [2024-08-11 20:59:20.818423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.795 [2024-08-11 20:59:20.818450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.795 [2024-08-11 20:59:20.818482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.795 [2024-08-11 20:59:20.818509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.795 [2024-08-11 20:59:20.818535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.795 [2024-08-11 20:59:20.818562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:111728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.818984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:16.795 [2024-08-11 20:59:20.818996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 
[2024-08-11 20:59:20.819009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1553ca0 is same with the state(6) to be set 00:18:16.795 [2024-08-11 20:59:20.819025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.795 [2024-08-11 20:59:20.819035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.795 [2024-08-11 20:59:20.819044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111744 len:8 PRP1 0x0 PRP2 0x0 00:18:16.795 [2024-08-11 20:59:20.819061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.819074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.795 [2024-08-11 20:59:20.819084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.795 [2024-08-11 20:59:20.819093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112200 len:8 PRP1 0x0 PRP2 0x0 00:18:16.795 [2024-08-11 20:59:20.819105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.819117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.795 [2024-08-11 20:59:20.819127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.795 [2024-08-11 20:59:20.819136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112208 len:8 PRP1 0x0 PRP2 0x0 00:18:16.795 [2024-08-11 20:59:20.819148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.819160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.795 [2024-08-11 20:59:20.819169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.795 [2024-08-11 20:59:20.819178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112216 len:8 PRP1 0x0 PRP2 0x0 00:18:16.795 [2024-08-11 20:59:20.819197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.819210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.795 [2024-08-11 20:59:20.819219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.795 [2024-08-11 20:59:20.819228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112224 len:8 PRP1 0x0 PRP2 0x0 00:18:16.795 [2024-08-11 20:59:20.819240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.819253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.795 [2024-08-11 20:59:20.819262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.795 [2024-08-11 20:59:20.819271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112232 len:8 PRP1 0x0 PRP2 0x0 00:18:16.795 [2024-08-11 20:59:20.819283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.819295] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.795 [2024-08-11 20:59:20.819304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.795 [2024-08-11 20:59:20.819313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112240 len:8 PRP1 0x0 PRP2 0x0 00:18:16.795 [2024-08-11 20:59:20.819330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.819342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.795 [2024-08-11 20:59:20.819351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.795 [2024-08-11 20:59:20.819360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112248 len:8 PRP1 0x0 PRP2 0x0 00:18:16.795 [2024-08-11 20:59:20.819372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.795 [2024-08-11 20:59:20.819384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.795 [2024-08-11 20:59:20.819393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.795 [2024-08-11 20:59:20.819402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112256 len:8 PRP1 0x0 PRP2 0x0 00:18:16.796 [2024-08-11 20:59:20.819419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.796 [2024-08-11 20:59:20.819440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.796 [2024-08-11 20:59:20.819450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112264 len:8 PRP1 0x0 PRP2 0x0 00:18:16.796 [2024-08-11 20:59:20.819461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.796 [2024-08-11 20:59:20.819482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.796 [2024-08-11 20:59:20.819491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112272 len:8 PRP1 0x0 PRP2 0x0 00:18:16.796 [2024-08-11 20:59:20.819503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.796 [2024-08-11 20:59:20.819524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.796 [2024-08-11 20:59:20.819540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112280 len:8 PRP1 0x0 PRP2 0x0 00:18:16.796 [2024-08-11 20:59:20.819552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.796 [2024-08-11 20:59:20.819573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.796 [2024-08-11 20:59:20.819583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112288 len:8 PRP1 0x0 PRP2 0x0 00:18:16.796 [2024-08-11 20:59:20.819617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.796 [2024-08-11 20:59:20.819640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.796 [2024-08-11 20:59:20.819649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112296 len:8 PRP1 0x0 PRP2 0x0 00:18:16.796 [2024-08-11 20:59:20.819661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.796 [2024-08-11 20:59:20.819682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.796 [2024-08-11 20:59:20.819692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112304 len:8 PRP1 0x0 PRP2 0x0 00:18:16.796 [2024-08-11 20:59:20.819708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.796 [2024-08-11 20:59:20.819730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.796 [2024-08-11 20:59:20.819739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112312 len:8 PRP1 0x0 PRP2 0x0 00:18:16.796 [2024-08-11 20:59:20.819751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:16.796 [2024-08-11 20:59:20.819773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:16.796 [2024-08-11 20:59:20.819782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112320 len:8 PRP1 0x0 PRP2 0x0 00:18:16.796 [2024-08-11 20:59:20.819798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819851] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1553ca0 was disconnected and freed. reset controller. 
00:18:16.796 [2024-08-11 20:59:20.819868] bdev_nvme.c:1861:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:18:16.796 [2024-08-11 20:59:20.819941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.796 [2024-08-11 20:59:20.819960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.796 [2024-08-11 20:59:20.819986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.819999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.796 [2024-08-11 20:59:20.820025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.820039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:16.796 [2024-08-11 20:59:20.820052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:16.796 [2024-08-11 20:59:20.820064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:16.796 [2024-08-11 20:59:20.823381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:16.796 [2024-08-11 20:59:20.823417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152f700 (9): Bad file descriptor 00:18:16.796 [2024-08-11 20:59:20.856120] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:16.796 00:18:16.796 Latency(us) 00:18:16.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.796 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:16.796 Verification LBA range: start 0x0 length 0x4000 00:18:16.796 NVMe0n1 : 15.01 9415.02 36.78 231.11 0.00 13241.86 510.14 16086.11 00:18:16.796 =================================================================================================================== 00:18:16.796 Total : 9415.02 36.78 231.11 0.00 13241.86 510.14 16086.11 00:18:16.796 Received shutdown signal, test time was about 15.000000 seconds 00:18:16.796 00:18:16.796 Latency(us) 00:18:16.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.796 =================================================================================================================== 00:18:16.796 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:16.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87963 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87963 /var/tmp/bdevperf.sock 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 87963 ']' 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:16.796 20:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:16.796 20:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:16.796 20:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:16.796 20:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:16.796 [2024-08-11 20:59:27.316347] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:16.796 20:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:18:17.055 [2024-08-11 20:59:27.576473] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:18:17.055 20:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:17.314 NVMe0n1 00:18:17.314 20:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:17.572 00:18:17.572 20:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:17.831 00:18:17.831 20:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:17.831 20:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:18.089 20:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 
10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:18.348 20:59:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:21.633 20:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:21.633 20:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:21.891 20:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88038 00:18:21.891 20:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:21.891 20:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88038 00:18:22.825 0 00:18:22.825 20:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:22.825 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:18:22.825 [2024-08-11 20:59:26.771681] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:18:22.825 [2024-08-11 20:59:26.771806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87963 ] 00:18:22.825 [2024-08-11 20:59:26.911431] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.825 [2024-08-11 20:59:26.963094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.825 [2024-08-11 20:59:27.013726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:22.825 [2024-08-11 20:59:29.088426] bdev_nvme.c:1861:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:22.825 [2024-08-11 20:59:29.088523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.825 [2024-08-11 20:59:29.088547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.825 [2024-08-11 20:59:29.088562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.825 [2024-08-11 20:59:29.088575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.825 [2024-08-11 20:59:29.088588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.825 [2024-08-11 20:59:29.088615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.825 [2024-08-11 20:59:29.088641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:22.825 [2024-08-11 20:59:29.088655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.825 [2024-08-11 20:59:29.088668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:22.825 [2024-08-11 20:59:29.088706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:22.825 [2024-08-11 20:59:29.088734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x610700 (9): Bad file descriptor 00:18:22.825 [2024-08-11 20:59:29.092717] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:22.825 Running I/O for 1 seconds... 00:18:22.825 00:18:22.825 Latency(us) 00:18:22.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.825 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:22.825 Verification LBA range: start 0x0 length 0x4000 00:18:22.825 NVMe0n1 : 1.01 7895.20 30.84 0.00 0.00 16152.69 2010.76 15073.28 00:18:22.825 =================================================================================================================== 00:18:22.825 Total : 7895.20 30.84 0.00 0.00 16152.69 2010.76 15073.28 00:18:22.825 20:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:22.825 20:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:23.083 20:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:23.648 20:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:23.648 20:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:23.905 20:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:24.163 20:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 87963 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 87963 ']' 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 87963 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87963 00:18:27.448 killing process with pid 87963 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87963' 00:18:27.448 20:59:37 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@965 -- # kill 87963 00:18:27.448 20:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # wait 87963 00:18:27.448 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:27.448 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # nvmfcleanup 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.015 rmmod nvme_tcp 00:18:28.015 rmmod nvme_fabrics 00:18:28.015 rmmod nvme_keyring 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # '[' -n 87724 ']' 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # killprocess 87724 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 87724 ']' 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 87724 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87724 00:18:28.015 killing process with pid 87724 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87724' 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@965 -- # kill 87724 00:18:28.015 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # wait 87724 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # iptr 00:18:28.274 20:59:38 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@783 -- # iptables-save 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@783 -- # iptables-restore 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:18:28.274 20:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:28.274 20:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:28.274 20:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # remove_spdk_ns 00:18:28.274 20:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.274 20:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.274 20:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # return 0 00:18:28.534 00:18:28.534 real 0m31.414s 00:18:28.534 user 2m1.270s 00:18:28.534 sys 0m5.640s 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:28.534 ************************************ 00:18:28.534 END TEST nvmf_failover 00:18:28.534 ************************************ 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.534 ************************************ 00:18:28.534 START TEST 
nvmf_host_discovery 00:18:28.534 ************************************ 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:28.534 * Looking for test storage... 00:18:28.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # prepare_net_devs 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # local -g is_hw=no 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # remove_spdk_ns 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@452 -- # nvmf_veth_init 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:28.534 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:18:28.535 Cannot find device "nvmf_init_br" 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:18:28.535 Cannot find device "nvmf_init_br2" 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:18:28.535 Cannot find device "nvmf_tgt_br" 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # true 00:18:28.535 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:18:28.794 Cannot find device "nvmf_tgt_br2" 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # true 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:18:28.794 Cannot find device "nvmf_init_br" 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:18:28.794 Cannot find device "nvmf_init_br2" 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:18:28.794 Cannot find device "nvmf_tgt_br" 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:18:28.794 Cannot find device "nvmf_tgt_br2" 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:18:28.794 Cannot find device "nvmf_br" 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:18:28.794 Cannot find device "nvmf_init_if" 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:18:28.794 
20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:18:28.794 Cannot find device "nvmf_init_if2" 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:28.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:28.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set lo up 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:18:28.794 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:18:29.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:29.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:29.054 00:18:29.054 --- 10.0.0.3 ping statistics --- 00:18:29.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.054 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:18:29.054 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:29.054 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:18:29.054 00:18:29.054 --- 10.0.0.4 ping statistics --- 00:18:29.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.054 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:29.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:29.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:18:29.054 00:18:29.054 --- 10.0.0.1 ping statistics --- 00:18:29.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.054 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:29.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:29.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:18:29.054 00:18:29.054 --- 10.0.0.2 ping statistics --- 00:18:29.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.054 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.054 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@453 -- # return 0 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@501 -- # nvmfpid=88360 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # waitforlisten 88360 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 88360 ']' 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:29.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:29.055 20:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:29.055 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:18:29.055 [2024-08-11 20:59:39.734704] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:18:29.055 [2024-08-11 20:59:39.734828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.313 [2024-08-11 20:59:39.874321] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.313 [2024-08-11 20:59:39.936759] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.313 [2024-08-11 20:59:39.936821] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.313 [2024-08-11 20:59:39.936835] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.314 [2024-08-11 20:59:39.936845] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.314 [2024-08-11 20:59:39.936854] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.314 [2024-08-11 20:59:39.936886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.314 [2024-08-11 20:59:39.992407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.251 [2024-08-11 20:59:40.811083] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.251 [2024-08-11 20:59:40.823223] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:30.251 20:59:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.251 null0 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.251 null1 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88392 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88392 /tmp/host.sock 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 88392 ']' 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:30.251 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:30.251 20:59:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.251 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:18:30.251 [2024-08-11 20:59:40.906423] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:18:30.251 [2024-08-11 20:59:40.906664] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88392 ] 00:18:30.510 [2024-08-11 20:59:41.046567] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.510 [2024-08-11 20:59:41.111141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.510 [2024-08-11 20:59:41.167194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:31.447 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.448 20:59:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:31.448 20:59:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.448 20:59:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:31.448 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.707 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.708 [2024-08-11 20:59:42.288223] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:31.708 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:31.974 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:31.974 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:18:31.974 20:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:18:32.233 [2024-08-11 20:59:42.925754] bdev_nvme.c:7000:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:32.233 [2024-08-11 20:59:42.925936] bdev_nvme.c:7080:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:32.233 [2024-08-11 20:59:42.925968] bdev_nvme.c:6963:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:32.233 
[2024-08-11 20:59:42.931793] bdev_nvme.c:6929:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:32.233 [2024-08-11 20:59:42.988640] bdev_nvme.c:6819:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:32.233 [2024-08-11 20:59:42.988816] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:32.800 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:32.800 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:32.800 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:32.800 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:32.800 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:32.800 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:32.800 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.800 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:32.800 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:32.800 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.059 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.059 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.059 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:33.059 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:33.059 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.059 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
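[editor's note] The trace above shows the polling pattern this test relies on: waitforcondition from autotest_common.sh re-evaluates a condition built from the host/discovery.sh helpers (get_subsystem_names, get_bdev_list) until it holds or the retry budget runs out, while the discovery poller attaches nvme0 in the background. The sketch below is a rough reconstruction from the xtrace visible here (max=10, eval of the condition string, rpc_cmd -s /tmp/host.sock ... | jq | sort | xargs, sleep 1 between retries); it is not the verbatim SPDK helper bodies, and the failure handling at the end is an assumption.

    # Rough reconstruction from the xtrace above -- not the verbatim SPDK helpers.
    get_bdev_list() {
        # Query the host-side SPDK app over its RPC socket and normalize the names
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            # Re-evaluate the condition each pass, e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
            if eval "$cond"; then
                return 0
            fi
            sleep 1    # matches the 'sleep 1' seen between retries in the trace
        done
        return 1       # assumed: the real helper likely reports a timeout failure here
    }

[end editor's note]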
00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:33.060 20:59:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:33.060 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:33.319 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:33.319 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:33.319 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.319 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.319 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.320 [2024-08-11 20:59:43.889434] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:33.320 [2024-08-11 20:59:43.889735] bdev_nvme.c:6982:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:33.320 [2024-08-11 20:59:43.889762] bdev_nvme.c:6963:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:33.320 [2024-08-11 20:59:43.895786] bdev_nvme.c:6924:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:18:33.320 20:59:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:33.320 [2024-08-11 20:59:43.959102] bdev_nvme.c:6819:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:33.320 [2024-08-11 20:59:43.959126] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:33.320 [2024-08-11 20:59:43.959133] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:33.320 20:59:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.320 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.580 [2024-08-11 20:59:44.130098] bdev_nvme.c:6982:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:33.580 [2024-08-11 20:59:44.130128] bdev_nvme.c:6963:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:33.580 [2024-08-11 20:59:44.136105] bdev_nvme.c:6787:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:33.580 [2024-08-11 20:59:44.136135] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:33.580 [2024-08-11 20:59:44.136201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.580 [2024-08-11 20:59:44.136228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.580 [2024-08-11 20:59:44.136240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.580 [2024-08-11 20:59:44.136249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.580 [2024-08-11 20:59:44.136257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.580 [2024-08-11 20:59:44.136266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.580 [2024-08-11 20:59:44.136275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:33.580 [2024-08-11 20:59:44.136283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.580 [2024-08-11 20:59:44.136291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2210 is same with the state(6) to be set 00:18:33.580 [2024-08-11 20:59:44.136334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d2210 (9): Bad file descriptor 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:33.580 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:33.839 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:33.840 20:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.218 [2024-08-11 20:59:45.566468] bdev_nvme.c:7000:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:35.218 [2024-08-11 20:59:45.566841] bdev_nvme.c:7080:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:35.218 [2024-08-11 20:59:45.566877] bdev_nvme.c:6963:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:35.218 [2024-08-11 20:59:45.572502] bdev_nvme.c:6929:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:35.218 [2024-08-11 20:59:45.633113] bdev_nvme.c:6819:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:35.218 [2024-08-11 20:59:45.633349] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@646 -- # local es=0 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@649 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.218 request: 00:18:35.218 { 00:18:35.218 "name": "nvme", 00:18:35.218 "trtype": "tcp", 00:18:35.218 "traddr": "10.0.0.3", 00:18:35.218 "adrfam": "ipv4", 00:18:35.218 "trsvcid": "8009", 00:18:35.218 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:35.218 "wait_for_attach": true, 00:18:35.218 "method": "bdev_nvme_start_discovery", 00:18:35.218 "req_id": 1 00:18:35.218 } 00:18:35.218 Got JSON-RPC error response 00:18:35.218 response: 00:18:35.218 { 00:18:35.218 "code": -17, 00:18:35.218 "message": "File exists" 00:18:35.218 } 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@649 -- # es=1 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@646 -- # local es=0 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@649 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.218 request: 00:18:35.218 { 00:18:35.218 "name": "nvme_second", 00:18:35.218 "trtype": "tcp", 00:18:35.218 "traddr": "10.0.0.3", 00:18:35.218 "adrfam": "ipv4", 00:18:35.218 "trsvcid": "8009", 00:18:35.218 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:35.218 "wait_for_attach": true, 00:18:35.218 "method": "bdev_nvme_start_discovery", 00:18:35.218 "req_id": 1 00:18:35.218 } 00:18:35.218 Got JSON-RPC error response 00:18:35.218 response: 00:18:35.218 { 00:18:35.218 "code": -17, 00:18:35.218 "message": "File exists" 00:18:35.218 } 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@649 -- # es=1 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 
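[editor's note] Both bdev_nvme_start_discovery attempts above are expected to fail with JSON-RPC error -17 ("File exists"), because a discovery service for 10.0.0.3:8009 is already running from discovery.sh@141; the test therefore wraps the RPC in the NOT helper, which inverts the exit status. Judging from the valid_exec_arg / es bookkeeping in the trace, the helper behaves roughly like the sketch below (the names mirror the trace, but the exact autotest_common.sh implementation may differ, e.g. in how signal exits above 128 are treated).

    # Sketch of the NOT / expected-failure pattern visible in the trace above.
    NOT() {
        local es=0
        "$@" || es=$?     # run the wrapped rpc_cmd and capture its exit status
        (( es != 0 ))     # the assertion passes only if the wrapped command failed
    }

    # Usage, as in host/discovery.sh@149: starting a second discovery service on the
    # same 10.0.0.3:8009 endpoint must be rejected with "File exists" (-17).
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

[end editor's note]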
00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@646 -- # local es=0 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@649 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:35.218 20:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.154 [2024-08-11 20:59:46.897870] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:36.155 [2024-08-11 20:59:46.897947] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ef8e0 with addr=10.0.0.3, port=8010 00:18:36.155 [2024-08-11 20:59:46.897972] nvme_tcp.c:2716:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:36.155 [2024-08-11 20:59:46.897982] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:36.155 [2024-08-11 20:59:46.897992] bdev_nvme.c:7062:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:37.532 [2024-08-11 20:59:47.897875] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:37.532 [2024-08-11 20:59:47.897950] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2405620 with addr=10.0.0.3, port=8010 00:18:37.532 [2024-08-11 20:59:47.897973] 
nvme_tcp.c:2716:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:37.532 [2024-08-11 20:59:47.897983] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:37.532 [2024-08-11 20:59:47.897993] bdev_nvme.c:7062:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:38.468 [2024-08-11 20:59:48.897712] bdev_nvme.c:7043:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:38.468 request: 00:18:38.468 { 00:18:38.468 "name": "nvme_second", 00:18:38.468 "trtype": "tcp", 00:18:38.468 "traddr": "10.0.0.3", 00:18:38.468 "adrfam": "ipv4", 00:18:38.468 "trsvcid": "8010", 00:18:38.468 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:38.468 "wait_for_attach": false, 00:18:38.468 "attach_timeout_ms": 3000, 00:18:38.468 "method": "bdev_nvme_start_discovery", 00:18:38.468 "req_id": 1 00:18:38.468 } 00:18:38.468 Got JSON-RPC error response 00:18:38.468 response: 00:18:38.468 { 00:18:38.468 "code": -110, 00:18:38.468 "message": "Connection timed out" 00:18:38.468 } 00:18:38.468 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:18:38.468 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@649 -- # es=1 00:18:38.468 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@557 -- # xtrace_disable 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88392 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # nvmfcleanup 00:18:38.469 20:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:18:38.469 rmmod nvme_tcp 00:18:38.469 rmmod nvme_fabrics 00:18:38.469 rmmod nvme_keyring 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # '[' -n 88360 ']' 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # killprocess 88360 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 88360 ']' 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 88360 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88360 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:38.469 killing process with pid 88360 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88360' 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 88360 00:18:38.469 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 88360 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # iptr 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@783 -- # iptables-save 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@783 -- # iptables-restore 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 
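The iptr helper seen above is how nvmftestfini puts the firewall back after the discovery test: it re-applies every existing rule except the ones the test scripts tagged. A minimal sketch of that pattern, assuming the SPDK_NVMF comment tag these scripts attach to their own rules:
  # keep every existing iptables rule except those carrying the SPDK_NVMF comment added by the test
  iptables-save | grep -v SPDK_NVMF | iptables-restore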
00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:18:38.728 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # remove_spdk_ns 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # return 0 00:18:38.987 00:18:38.987 real 0m10.472s 00:18:38.987 user 0m19.703s 00:18:38.987 sys 0m2.115s 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.987 ************************************ 00:18:38.987 END TEST nvmf_host_discovery 00:18:38.987 ************************************ 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:38.987 20:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.988 ************************************ 00:18:38.988 START TEST nvmf_host_multipath_status 00:18:38.988 ************************************ 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:38.988 * Looking for test storage... 
00:18:38.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.988 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # prepare_net_devs 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # local -g is_hw=no 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # remove_spdk_ns 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:18:39.256 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@452 -- # nvmf_veth_init 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:18:39.257 Cannot find device "nvmf_init_br" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:18:39.257 Cannot find device "nvmf_init_br2" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:18:39.257 Cannot find device "nvmf_tgt_br" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:18:39.257 Cannot find device "nvmf_tgt_br2" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:18:39.257 Cannot find device "nvmf_init_br" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:18:39.257 Cannot find device "nvmf_init_br2" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:18:39.257 Cannot find device "nvmf_tgt_br" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:18:39.257 Cannot find device "nvmf_tgt_br2" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@165 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:18:39.257 Cannot find device "nvmf_br" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:18:39.257 Cannot find device "nvmf_init_if" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:18:39.257 Cannot find device "nvmf_init_if2" 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:39.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:39.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:39.257 20:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:39.257 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:39.257 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:39.257 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:39.257 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:18:39.545 20:59:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:18:39.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:39.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:18:39.545 00:18:39.545 --- 10.0.0.3 ping statistics --- 00:18:39.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.545 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:18:39.545 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:39.545 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.107 ms 00:18:39.545 00:18:39.545 --- 10.0.0.4 ping statistics --- 00:18:39.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.545 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:39.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:18:39.545 00:18:39.545 --- 10.0.0.1 ping statistics --- 00:18:39.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.545 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:39.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:18:39.545 00:18:39.545 --- 10.0.0.2 ping statistics --- 00:18:39.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.545 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@453 -- # return 0 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:39.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
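Before nvmfappstart brings the target up, the four pings above verify the veth/bridge wiring in both directions. A condensed sketch of that connectivity check, reusing the namespace and addresses configured earlier in this log:
  # host-side initiator interfaces reach the target namespace across nvmf_br
  ping -c 1 10.0.0.3
  ping -c 1 10.0.0.4
  # and the target namespace reaches the initiator addresses back
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2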
00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # nvmfpid=88893 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # waitforlisten 88893 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 88893 ']' 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.545 20:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:39.545 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:18:39.545 [2024-08-11 20:59:50.278696] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:18:39.545 [2024-08-11 20:59:50.278792] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.804 [2024-08-11 20:59:50.418445] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:39.804 [2024-08-11 20:59:50.506813] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.804 [2024-08-11 20:59:50.507171] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.804 [2024-08-11 20:59:50.507346] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.804 [2024-08-11 20:59:50.507413] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.804 [2024-08-11 20:59:50.507450] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
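nvmfappstart launches the target inside the test namespace so that only the nvmf_tgt_if/nvmf_tgt_if2 addresses are visible to it; the backgrounding and pid capture below are a sketch of what the helper does, with the readiness wait handled by waitforlisten on /var/tmp/spdk.sock as shown above:
  # run the NVMe-oF target inside nvmf_tgt_ns_spdk on cores 0-1 with all tracepoint groups enabled
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!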
00:18:39.804 [2024-08-11 20:59:50.507780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.804 [2024-08-11 20:59:50.507792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.804 [2024-08-11 20:59:50.566301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.740 20:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:40.740 20:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:18:40.740 20:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:18:40.740 20:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.740 20:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:40.740 20:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.740 20:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=88893 00:18:40.740 20:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:40.998 [2024-08-11 20:59:51.639136] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.998 20:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:41.256 Malloc0 00:18:41.256 20:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:41.822 20:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:41.822 20:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:42.080 [2024-08-11 20:59:52.841675] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:42.338 20:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:42.338 [2024-08-11 20:59:53.101786] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:42.614 20:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=88949 00:18:42.614 20:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:42.614 20:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.614 20:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 88949 /var/tmp/bdevperf.sock 00:18:42.614 
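Condensed, the target-side configuration and the initiator launch performed above look like the sketch below; every value is taken from this run, with paths shortened to the spdk repo root:
  # target side: TCP transport, a 64 MiB/512 B-block malloc bdev, and an ANA-reporting subsystem exposed on two ports
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # initiator side: bdevperf with its own RPC socket, queue depth 128, 4 KiB verify workload for 90 s
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &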
20:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 88949 ']' 00:18:42.614 20:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.614 20:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:42.614 20:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.614 20:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:42.614 20:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:43.547 20:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:43.547 20:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:18:43.547 20:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:43.805 20:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:44.063 Nvme0n1 00:18:44.063 20:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:44.629 Nvme0n1 00:18:44.629 20:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:44.629 20:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:46.530 20:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:46.531 20:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:46.789 20:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:47.047 20:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:48.424 20:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:48.424 20:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:48.424 20:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:18:48.424 20:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:48.424 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.424 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:48.424 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.424 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:48.683 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:48.683 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:48.683 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.683 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:48.941 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.942 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:48.942 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.942 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:49.200 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.200 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:49.200 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.200 20:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:49.459 21:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.459 21:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:49.459 21:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:49.459 21:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.718 21:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
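Each check_status/port_status call above is the same probe repeated per port and per field: dump the initiator's I/O paths over the bdevperf RPC socket and select one attribute with jq. A minimal sketch of a single probe as used in this run:
  # is the path through port 4420 currently the active one for the multipath controller?
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
  # swapping .current for .connected or .accessible yields the other two columns checked by check_status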
00:18:49.718 21:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:49.718 21:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:50.285 21:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:50.544 21:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:51.479 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:51.479 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:51.479 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.479 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:51.738 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:51.738 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:51.738 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.738 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:51.995 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.995 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:51.995 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.995 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:52.253 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.253 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:52.253 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:52.253 21:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.511 21:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.512 21:00:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:52.512 21:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.512 21:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:52.769 21:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.769 21:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:52.769 21:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:52.769 21:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.358 21:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.358 21:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:53.358 21:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:53.358 21:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:53.617 21:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:54.994 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:54.994 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:54.994 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.994 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:54.994 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.994 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:54.994 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.994 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:55.253 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:55.253 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:55.253 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.253 21:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:55.512 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.512 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:55.512 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.512 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:55.772 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.772 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:55.772 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:55.772 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.030 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.030 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:56.030 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:56.030 21:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.598 21:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.598 21:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:56.598 21:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:56.598 21:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:56.857 21:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:58.234 21:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:58.234 21:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # 
port_status 4420 current true 00:18:58.234 21:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.234 21:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:58.234 21:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.234 21:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:58.234 21:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.234 21:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:58.493 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:58.493 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:58.493 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.493 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:58.752 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.752 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:59.010 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.010 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:59.269 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.269 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:59.269 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.269 21:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:59.527 21:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.528 21:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:59.528 21:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:59.528 21:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.786 21:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:59.786 21:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:59.786 21:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:00.045 21:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:00.303 21:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:01.239 21:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:01.239 21:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:01.239 21:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.239 21:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:01.498 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:01.498 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:01.498 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.498 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:01.756 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:01.756 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:01.756 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:01.757 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.015 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.015 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:02.015 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.015 21:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").connected' 00:19:02.273 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.273 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:02.273 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.273 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:02.841 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:02.841 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:02.841 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.841 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:03.100 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:03.100 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:03.100 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:03.358 21:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:03.617 21:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:04.552 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:04.552 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:04.552 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.552 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:04.810 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.810 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:04.810 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.810 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
00:19:05.069 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.069 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:05.069 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.069 21:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:05.327 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.327 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:05.327 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:05.327 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.586 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.586 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:05.586 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.586 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:05.844 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:05.844 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:05.844 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:05.844 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.103 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.103 21:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:06.456 21:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:06.456 21:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:06.714 21:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:06.972 21:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:08.345 21:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:08.345 21:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:08.345 21:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.346 21:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:08.346 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.346 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:08.346 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:08.346 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.604 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.604 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:08.604 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.604 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:08.862 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.862 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:08.862 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:08.862 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:09.121 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.121 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:09.121 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.121 21:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:09.686 21:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:19:09.686 21:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:09.686 21:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:09.686 21:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.945 21:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.945 21:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:09.945 21:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:10.203 21:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:10.461 21:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:11.397 21:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:11.397 21:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:11.397 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.397 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:11.656 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:11.656 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:11.656 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.656 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:11.913 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.914 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:11.914 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.914 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:12.172 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.172 21:00:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:12.172 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:12.172 21:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.430 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.430 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:12.430 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.430 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:12.690 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.690 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:12.690 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:12.690 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.948 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.948 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:12.948 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:13.206 21:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:13.773 21:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:14.708 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:14.708 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:14.708 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.708 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:14.967 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.967 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:14.967 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:14.967 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.225 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.225 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:15.225 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.225 21:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:15.483 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.483 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:15.483 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.483 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:15.742 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:15.742 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:15.742 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:15.742 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:16.000 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:16.000 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:16.000 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:16.000 21:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.259 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:16.259 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:16.259 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n 
non_optimized 00:19:16.518 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:16.776 21:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:18.151 21:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:18.151 21:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:18.151 21:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.151 21:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:18.151 21:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.151 21:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:18.151 21:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.151 21:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:18.409 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:18.409 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:18.409 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.409 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:18.667 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.667 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:18.668 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.668 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:18.926 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:18.926 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:18.926 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:18.926 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq 
-r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:19.184 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:19.184 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:19.184 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:19.184 21:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 88949 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 88949 ']' 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 88949 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88949 00:19:19.755 killing process with pid 88949 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88949' 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 88949 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 88949 00:19:19.755 Connection closed with partial response: 00:19:19.755 00:19:19.755 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 88949 00:19:19.755 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:19.755 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:19:19.755 [2024-08-11 20:59:53.176928] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:19:19.755 [2024-08-11 20:59:53.177051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88949 ] 00:19:19.755 [2024-08-11 20:59:53.314208] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.755 [2024-08-11 20:59:53.397264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.755 [2024-08-11 20:59:53.448931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:19.755 Running I/O for 90 seconds... 
00:19:19.755 [2024-08-11 21:00:10.569608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.569707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.569765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.569786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.569808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.569823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.569843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.569858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.569878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.569893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.569913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.569928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.569949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.569964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.569984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.569999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.570035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.570083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.570146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.570181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.570214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.570249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.570283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.570323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:19.756 [2024-08-11 21:00:10.570859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.756 [2024-08-11 21:00:10.570932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.570961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.570979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.571001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.571016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.571037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.571061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.571083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.571100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.571122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.571136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.571157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.571172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.571193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.571208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.571229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.756 [2024-08-11 21:00:10.571244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:19.756 [2024-08-11 21:00:10.571267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.571937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 
dnr:0 00:19:19.757 [2024-08-11 21:00:10.571981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.571997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.572033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.572069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.572105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.757 [2024-08-11 21:00:10.572142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.757 [2024-08-11 21:00:10.572790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:19.757 [2024-08-11 21:00:10.572812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.572828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.572848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.572863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.572883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.572898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.572918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.572941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.572964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.572980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.573016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.573051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.573090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:19.758 [2024-08-11 21:00:10.573126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.573689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.573726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.573763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.573799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.573836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.573883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.573918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.573939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.573954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.574778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.758 [2024-08-11 21:00:10.574829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.574866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.574883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.574911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.574927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.574956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.574973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.575000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.575016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.575044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.575060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.575098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.575114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:19:19.758 [2024-08-11 21:00:10.575142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.575159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.575217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.758 [2024-08-11 21:00:10.575238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:19.758 [2024-08-11 21:00:10.575279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:10.575296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:10.575324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:10.575339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:10.575367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:10.575382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:10.575409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:10.575424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:10.575452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:10.575467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:10.575494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:10.575510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:10.575537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:10.575552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:10.575580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:10.575595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.513376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.513458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.513514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.513534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.513557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.513571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.513606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.513625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.514810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.514874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.514903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.514920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.514940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.514954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.514974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.514988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.515022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.515191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:19.759 [2024-08-11 21:00:27.515413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.515551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.515585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.515654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.515689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.515725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.515759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.759 [2024-08-11 21:00:27.515805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:19.759 [2024-08-11 21:00:27.515862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.759 [2024-08-11 21:00:27.515877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:19.760 Received shutdown signal, test time was about 34.992250 seconds 00:19:19.760 00:19:19.760 Latency(us) 00:19:19.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.760 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:19.760 Verification LBA range: start 0x0 length 0x4000 00:19:19.760 Nvme0n1 : 34.99 9520.31 37.19 0.00 0.00 13416.39 781.96 4026531.84 00:19:19.760 =================================================================================================================== 00:19:19.760 Total : 9520.31 37.19 0.00 0.00 13416.39 781.96 4026531.84 00:19:19.760 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.051 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:20.051 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:20.051 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:20.051 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # nvmfcleanup 00:19:20.051 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:19:20.051 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.051 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:19:20.051 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.051 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.051 rmmod nvme_tcp 00:19:20.310 rmmod nvme_fabrics 00:19:20.310 rmmod nvme_keyring 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # '[' -n 88893 ']' 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # killprocess 88893 00:19:20.310 21:00:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 88893 ']' 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 88893 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88893 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:20.310 killing process with pid 88893 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88893' 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 88893 00:19:20.310 21:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 88893 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # iptr 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@783 -- # iptables-save 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@783 -- # iptables-restore 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- 
# ip link delete nvmf_init_if 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # remove_spdk_ns 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.568 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # return 0 00:19:20.827 00:19:20.827 real 0m41.714s 00:19:20.827 user 2m14.790s 00:19:20.827 sys 0m12.114s 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:20.827 ************************************ 00:19:20.827 END TEST nvmf_host_multipath_status 00:19:20.827 ************************************ 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.827 ************************************ 00:19:20.827 START TEST nvmf_discovery_remove_ifc 00:19:20.827 ************************************ 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:20.827 * Looking for test storage... 
00:19:20.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.827 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # prepare_net_devs 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # local -g is_hw=no 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # remove_spdk_ns 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@452 -- # nvmf_veth_init 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:20.828 21:00:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:19:20.828 Cannot find device "nvmf_init_br" 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:19:20.828 Cannot find device "nvmf_init_br2" 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:19:20.828 Cannot find device "nvmf_tgt_br" 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # true 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.828 Cannot find device "nvmf_tgt_br2" 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # true 00:19:20.828 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:19:21.087 Cannot find device "nvmf_init_br" 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:19:21.087 Cannot find device "nvmf_init_br2" 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:19:21.087 Cannot find device "nvmf_tgt_br" 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:19:21.087 Cannot find device "nvmf_tgt_br2" 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:19:21.087 Cannot find device "nvmf_br" 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:19:21.087 Cannot find device "nvmf_init_if" 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:21.087 21:00:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:19:21.087 Cannot find device "nvmf_init_if2" 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:19:21.087 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:19:21.346 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:21.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:19:21.346 00:19:21.346 --- 10.0.0.3 ping statistics --- 00:19:21.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.346 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:19:21.346 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:21.346 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:19:21.346 00:19:21.346 --- 10.0.0.4 ping statistics --- 00:19:21.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.346 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:21.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:19:21.346 00:19:21.346 --- 10.0.0.1 ping statistics --- 00:19:21.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.346 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:21.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:19:21.346 00:19:21.346 --- 10.0.0.2 ping statistics --- 00:19:21.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.346 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@453 -- # return 0 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@501 -- # nvmfpid=89801 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # waitforlisten 89801 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 89801 ']' 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:21.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
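The namespace, veth, bridge, and iptables plumbing logged above is all done by nvmf_veth_init in nvmf/common.sh. A condensed sketch of the same topology, using the device names and addresses that appear in this log (only the first initiator/target pair is shown; the second pair and all error handling are omitted, and root privileges are assumed):

    # build an isolated target namespace bridged to the initiator side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic in on the initiator-side interface, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3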
00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:21.346 21:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:21.346 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:19:21.346 [2024-08-11 21:00:32.024404] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:19:21.346 [2024-08-11 21:00:32.024497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.604 [2024-08-11 21:00:32.164382] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.604 [2024-08-11 21:00:32.249231] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.604 [2024-08-11 21:00:32.249299] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.604 [2024-08-11 21:00:32.249314] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.604 [2024-08-11 21:00:32.249325] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.604 [2024-08-11 21:00:32.249334] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.604 [2024-08-11 21:00:32.249376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.604 [2024-08-11 21:00:32.305487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:21.604 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:21.604 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:19:21.604 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:19:21.604 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.604 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:21.863 [2024-08-11 21:00:32.425826] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.863 [2024-08-11 21:00:32.433961] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:21.863 null0 00:19:21.863 [2024-08-11 21:00:32.465853] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=89824 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 89824 /tmp/host.sock 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 89824 ']' 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:21.863 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:21.863 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:21.863 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:19:21.863 [2024-08-11 21:00:32.545157] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:19:21.863 [2024-08-11 21:00:32.545246] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89824 ] 00:19:22.122 [2024-08-11 21:00:32.685890] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.122 [2024-08-11 21:00:32.780734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.122 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:22.122 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:19:22.122 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:22.122 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:22.122 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:22.122 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:22.122 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:22.122 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:22.122 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:22.122 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:22.122 [2024-08-11 21:00:32.885349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:22.380 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:22.380 21:00:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:22.380 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:22.380 21:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:23.316 [2024-08-11 21:00:33.938736] bdev_nvme.c:7000:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:23.316 [2024-08-11 21:00:33.938780] bdev_nvme.c:7080:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:23.316 [2024-08-11 21:00:33.938798] bdev_nvme.c:6963:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:23.316 [2024-08-11 21:00:33.944776] bdev_nvme.c:6929:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:23.316 [2024-08-11 21:00:34.001840] bdev_nvme.c:7790:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:23.316 [2024-08-11 21:00:34.001916] bdev_nvme.c:7790:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:23.316 [2024-08-11 21:00:34.001945] bdev_nvme.c:7790:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:23.316 [2024-08-11 21:00:34.001962] bdev_nvme.c:6819:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:23.316 [2024-08-11 21:00:34.001987] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:23.316 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:23.316 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:23.316 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:23.317 [2024-08-11 21:00:34.007344] bdev_nvme.c:1610:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1949fa0 was disconnected and freed. delete nvme_qpair. 
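The discovery attach above is driven by the bdev_nvme_start_discovery RPC that the test issues against the host-side app's /tmp/host.sock. A hedged equivalent using SPDK's scripts/rpc.py (rpc_cmd in the test simply forwards these same arguments; the repo path is abbreviated here):

    # start persistent discovery against the target's discovery service on 10.0.0.3:8009
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # list the bdevs the discovery service attached (nvme0n1 at this point in the log)
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'

The deliberately short loss/reconnect timeouts are what let the controller be declared lost within a couple of seconds once the target interface is removed later in this run.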
00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:23.317 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:23.575 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:23.575 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:23.575 21:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:24.511 21:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:24.511 21:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:24.511 21:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:24.511 21:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:24.511 21:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:24.511 21:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:24.511 21:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:24.511 21:00:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:24.511 21:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:24.511 21:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:25.446 21:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:25.446 21:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:25.446 21:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:25.446 21:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:25.446 21:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:25.446 21:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:25.446 21:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:25.446 21:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:25.704 21:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:25.704 21:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:26.638 21:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:26.638 21:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.638 21:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:26.638 21:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:26.638 21:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:26.638 21:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:26.638 21:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:26.638 21:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:26.638 21:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:26.638 21:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:27.571 21:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:27.571 21:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:27.571 21:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:27.571 21:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:27.571 21:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.571 21:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:27.571 21:00:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:27.571 21:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:27.829 21:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:27.829 21:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:28.763 21:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:28.763 21:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:28.763 21:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:28.763 21:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:28.763 21:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:28.763 21:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:28.763 21:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:28.763 21:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:28.763 [2024-08-11 21:00:39.429792] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:28.763 [2024-08-11 21:00:39.429855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.763 [2024-08-11 21:00:39.429870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.763 [2024-08-11 21:00:39.429883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.763 [2024-08-11 21:00:39.429891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.763 [2024-08-11 21:00:39.429900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.763 [2024-08-11 21:00:39.429909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.763 [2024-08-11 21:00:39.429918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.763 [2024-08-11 21:00:39.429926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.763 [2024-08-11 21:00:39.429936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.763 [2024-08-11 21:00:39.429960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.763 [2024-08-11 21:00:39.429968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190e350 is same with the state(6) to be set 00:19:28.763 21:00:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:28.763 21:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:28.763 [2024-08-11 21:00:39.439788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190e350 (9): Bad file descriptor 00:19:28.763 [2024-08-11 21:00:39.449813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:29.698 21:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:29.698 21:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:29.698 21:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.698 21:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:29.698 21:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:29.698 21:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.698 21:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:29.956 [2024-08-11 21:00:40.499682] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:19:29.956 [2024-08-11 21:00:40.499755] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x190e350 with addr=10.0.0.3, port=4420 00:19:29.956 [2024-08-11 21:00:40.499778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190e350 is same with the state(6) to be set 00:19:29.956 [2024-08-11 21:00:40.499823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190e350 (9): Bad file descriptor 00:19:29.956 [2024-08-11 21:00:40.500436] bdev_nvme.c:2888:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:29.956 [2024-08-11 21:00:40.500493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:29.956 [2024-08-11 21:00:40.500512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:29.956 [2024-08-11 21:00:40.500530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:29.956 [2024-08-11 21:00:40.500565] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:29.956 [2024-08-11 21:00:40.500584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:29.956 21:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:29.956 21:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:29.956 21:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:30.891 [2024-08-11 21:00:41.500674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
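The connect() errno 110 and reset failures above follow directly from the step logged earlier in which the test strips the listener's address and downs its interface inside the target namespace; a sketch of that provoking step (the same two commands shown at discovery_remove_ifc.sh@75-76 in this log):

    # remove the interface the 10.0.0.3 listener lives on, from inside the target netns
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

With --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1, the host is expected to retry briefly and then give up, which is the failure sequence printed here.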
00:19:30.892 [2024-08-11 21:00:41.500704] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:30.892 [2024-08-11 21:00:41.500714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:30.892 [2024-08-11 21:00:41.500722] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:19:30.892 [2024-08-11 21:00:41.500735] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:30.892 [2024-08-11 21:00:41.500759] bdev_nvme.c:6751:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:30.892 [2024-08-11 21:00:41.500788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.892 [2024-08-11 21:00:41.500801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.892 [2024-08-11 21:00:41.500813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.892 [2024-08-11 21:00:41.500821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.892 [2024-08-11 21:00:41.500829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.892 [2024-08-11 21:00:41.500837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.892 [2024-08-11 21:00:41.500845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.892 [2024-08-11 21:00:41.500853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.892 [2024-08-11 21:00:41.500861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.892 [2024-08-11 21:00:41.500868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.892 [2024-08-11 21:00:41.500876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
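Once the controller is declared lost, the test's wait_for_bdev ''/get_bdev_list helpers simply poll bdev_get_bdevs once per second until the list is empty, then the interface is restored so the still-running discovery service re-attaches the subsystem (it comes back as nvme1n1 below). A minimal sketch of that wait-and-restore sequence, assuming the same rpc.py socket as above:

    # poll until the previously attached bdev disappears ...
    while [ -n "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
        sleep 1
    done
    # ... then bring the target interface back so discovery re-attaches it
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up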
00:19:30.892 [2024-08-11 21:00:41.501632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x190d900 (9): Bad file descriptor 00:19:30.892 [2024-08-11 21:00:41.502649] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:30.892 [2024-08-11 21:00:41.502670] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:30.892 21:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:32.267 21:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:32.267 21:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.267 21:00:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:32.267 21:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:32.267 21:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:32.267 21:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:32.267 21:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:32.267 21:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:32.267 21:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:32.267 21:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:32.833 [2024-08-11 21:00:43.512364] bdev_nvme.c:7000:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:32.833 [2024-08-11 21:00:43.512541] bdev_nvme.c:7080:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:32.833 [2024-08-11 21:00:43.512573] bdev_nvme.c:6963:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:32.833 [2024-08-11 21:00:43.518403] bdev_nvme.c:6929:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:32.833 [2024-08-11 21:00:43.574607] bdev_nvme.c:7790:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:32.833 [2024-08-11 21:00:43.574811] bdev_nvme.c:7790:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:32.833 [2024-08-11 21:00:43.574875] bdev_nvme.c:7790:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:32.833 [2024-08-11 21:00:43.574979] bdev_nvme.c:6819:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:32.833 [2024-08-11 21:00:43.575038] bdev_nvme.c:6778:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:32.833 [2024-08-11 21:00:43.581107] bdev_nvme.c:1610:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1919990 was disconnected and freed. delete nvme_qpair. 
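After nvme1n1 shows up again the test is finished and nvmftestfini tears everything down. A condensed sketch of that cleanup, mirroring the commands that follow in the log (the final namespace removal is assumed to be what remove_spdk_ns does; pid variable names are the test's own):

    # stop the host-side and target-side apps started earlier (pids 89824 / 89801 in this run)
    kill "$hostpid" && wait "$hostpid"
    kill "$nvmfpid" && wait "$nvmfpid"
    # unload the kernel initiator modules used by the test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # restore iptables without the rules tagged SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # dismantle the bridge, veth pairs and target namespace
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster; ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of remove_spdk_ns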
00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 89824 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 89824 ']' 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 89824 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89824 00:19:33.092 killing process with pid 89824 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89824' 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 89824 00:19:33.092 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 89824 00:19:33.355 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:33.355 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # nvmfcleanup 00:19:33.355 21:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:33.355 rmmod nvme_tcp 00:19:33.355 rmmod nvme_fabrics 00:19:33.355 rmmod nvme_keyring 00:19:33.355 21:00:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # '[' -n 89801 ']' 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # killprocess 89801 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 89801 ']' 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 89801 00:19:33.355 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:19:33.624 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89801 00:19:33.625 killing process with pid 89801 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89801' 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 89801 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 89801 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # iptr 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@783 -- # iptables-restore 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@783 -- # iptables-save 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:19:33.625 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # remove_spdk_ns 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.883 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # return 0 00:19:33.883 00:19:33.883 real 0m13.159s 00:19:33.883 user 0m22.366s 00:19:33.884 sys 0m2.462s 00:19:33.884 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:33.884 21:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.884 ************************************ 00:19:33.884 END TEST nvmf_discovery_remove_ifc 00:19:33.884 ************************************ 00:19:33.884 21:00:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:33.884 21:00:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:33.884 21:00:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:33.884 21:00:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.884 ************************************ 00:19:33.884 START TEST nvmf_identify_kernel_target 00:19:33.884 ************************************ 00:19:33.884 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:34.143 * Looking for test storage... 
00:19:34.143 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # prepare_net_devs 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # local -g is_hw=no 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # remove_spdk_ns 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # nvmf_veth_init 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:34.143 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:19:34.144 Cannot find device "nvmf_init_br" 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:19:34.144 Cannot find device "nvmf_init_br2" 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:19:34.144 Cannot find device "nvmf_tgt_br" 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:19:34.144 Cannot find device "nvmf_tgt_br2" 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:19:34.144 Cannot find device "nvmf_init_br" 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:19:34.144 Cannot find device "nvmf_init_br2" 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:19:34.144 Cannot find device "nvmf_tgt_br" 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:19:34.144 Cannot find device "nvmf_tgt_br2" 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:19:34.144 Cannot find device "nvmf_br" 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:19:34.144 Cannot find device "nvmf_init_if" 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:19:34.144 Cannot find device "nvmf_init_if2" 00:19:34.144 21:00:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:34.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:34.144 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:19:34.144 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:19:34.402 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:34.402 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:34.402 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:34.402 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:34.402 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:34.402 21:00:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:19:34.403 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:34.403 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:19:34.403 00:19:34.403 --- 10.0.0.3 ping statistics --- 00:19:34.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.403 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:19:34.403 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:34.403 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:19:34.403 00:19:34.403 --- 10.0.0.4 ping statistics --- 00:19:34.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.403 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:19:34.403 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:34.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
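The nvmf_veth_init sequence traced above builds a self-contained test network: four veth pairs, a network namespace holding the target-side ends, a bridge tying the host-side ends together, and iptables rules opening the NVMe/TCP port before connectivity is ping-checked. The following is a condensed, hand-written summary of the commands visible in the trace, not the script itself; interface names and 10.0.0.x addresses are the ones the trace uses.

# Namespace for the target-side interfaces
ip netns add nvmf_tgt_ns_spdk

# Veth pairs: initiator/target ends peered with bridge-side ends
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends move into the namespace; initiators stay in the root namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators 10.0.0.1/.2, namespaced targets 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, then bridge the host-side ends together
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port on the initiator-facing interfaces and allow bridge forwarding
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check: both directions across the bridge
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1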
00:19:34.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:19:34.662 00:19:34.662 --- 10.0.0.1 ping statistics --- 00:19:34.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.662 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:34.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:19:34.662 00:19:34.662 --- 10.0.0.2 ping statistics --- 00:19:34.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.662 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # return 0 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@761 -- # local ip 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # nvmet=/sys/kernel/config/nvmet 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@655 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # local block nvme 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # [[ ! -e /sys/module/nvmet ]] 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # modprobe nvmet 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:34.662 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:34.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:34.920 Waiting for block devices as requested 00:19:34.920 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:35.178 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # is_block_zoned nvme0n1 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # block_in_use nvme0n1 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:35.178 No valid GPT data, bailing 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # nvme=/dev/nvme0n1 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # for block in 
/sys/block/nvme* 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # is_block_zoned nvme0n2 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # block_in_use nvme0n2 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:35.178 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:35.437 No valid GPT data, bailing 00:19:35.437 21:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # nvme=/dev/nvme0n2 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # is_block_zoned nvme0n3 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # block_in_use nvme0n3 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:35.437 No valid GPT data, bailing 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # nvme=/dev/nvme0n3 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@672 -- # is_block_zoned nvme1n1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # block_in_use nvme1n1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:35.437 No valid GPT data, bailing 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # nvme=/dev/nvme1n1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # [[ -b /dev/nvme1n1 ]] 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # echo 1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # echo /dev/nvme1n1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo 1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 10.0.0.1 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo tcp 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 4420 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo ipv4 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:35.437 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -a 10.0.0.1 -t tcp -s 4420 00:19:35.696 00:19:35.696 Discovery Log Number of Records 2, Generation counter 2 00:19:35.696 =====Discovery Log Entry 0====== 00:19:35.696 trtype: tcp 00:19:35.696 adrfam: ipv4 00:19:35.696 subtype: current discovery subsystem 00:19:35.696 treq: not 
specified, sq flow control disable supported 00:19:35.696 portid: 1 00:19:35.696 trsvcid: 4420 00:19:35.696 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:35.696 traddr: 10.0.0.1 00:19:35.696 eflags: none 00:19:35.696 sectype: none 00:19:35.696 =====Discovery Log Entry 1====== 00:19:35.696 trtype: tcp 00:19:35.696 adrfam: ipv4 00:19:35.696 subtype: nvme subsystem 00:19:35.696 treq: not specified, sq flow control disable supported 00:19:35.696 portid: 1 00:19:35.696 trsvcid: 4420 00:19:35.696 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:35.696 traddr: 10.0.0.1 00:19:35.696 eflags: none 00:19:35.696 sectype: none 00:19:35.696 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:35.696 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:35.696 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:19:35.696 ===================================================== 00:19:35.696 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:35.696 ===================================================== 00:19:35.696 Controller Capabilities/Features 00:19:35.696 ================================ 00:19:35.696 Vendor ID: 0000 00:19:35.696 Subsystem Vendor ID: 0000 00:19:35.696 Serial Number: 2bc4020abb4c1722f85a 00:19:35.696 Model Number: Linux 00:19:35.696 Firmware Version: 6.8.9-20 00:19:35.696 Recommended Arb Burst: 0 00:19:35.696 IEEE OUI Identifier: 00 00 00 00:19:35.696 Multi-path I/O 00:19:35.696 May have multiple subsystem ports: No 00:19:35.696 May have multiple controllers: No 00:19:35.696 Associated with SR-IOV VF: No 00:19:35.696 Max Data Transfer Size: Unlimited 00:19:35.696 Max Number of Namespaces: 0 00:19:35.696 Max Number of I/O Queues: 1024 00:19:35.696 NVMe Specification Version (VS): 1.3 00:19:35.696 NVMe Specification Version (Identify): 1.3 00:19:35.696 Maximum Queue Entries: 1024 00:19:35.696 Contiguous Queues Required: No 00:19:35.696 Arbitration Mechanisms Supported 00:19:35.696 Weighted Round Robin: Not Supported 00:19:35.696 Vendor Specific: Not Supported 00:19:35.696 Reset Timeout: 7500 ms 00:19:35.696 Doorbell Stride: 4 bytes 00:19:35.696 NVM Subsystem Reset: Not Supported 00:19:35.696 Command Sets Supported 00:19:35.696 NVM Command Set: Supported 00:19:35.696 Boot Partition: Not Supported 00:19:35.696 Memory Page Size Minimum: 4096 bytes 00:19:35.696 Memory Page Size Maximum: 4096 bytes 00:19:35.696 Persistent Memory Region: Not Supported 00:19:35.696 Optional Asynchronous Events Supported 00:19:35.696 Namespace Attribute Notices: Not Supported 00:19:35.696 Firmware Activation Notices: Not Supported 00:19:35.696 ANA Change Notices: Not Supported 00:19:35.696 PLE Aggregate Log Change Notices: Not Supported 00:19:35.696 LBA Status Info Alert Notices: Not Supported 00:19:35.696 EGE Aggregate Log Change Notices: Not Supported 00:19:35.696 Normal NVM Subsystem Shutdown event: Not Supported 00:19:35.696 Zone Descriptor Change Notices: Not Supported 00:19:35.696 Discovery Log Change Notices: Supported 00:19:35.696 Controller Attributes 00:19:35.696 128-bit Host Identifier: Not Supported 00:19:35.696 Non-Operational Permissive Mode: Not Supported 00:19:35.696 NVM Sets: Not Supported 00:19:35.696 Read Recovery Levels: Not Supported 00:19:35.696 Endurance Groups: Not Supported 00:19:35.696 Predictable Latency Mode: Not Supported 00:19:35.696 Traffic Based Keep ALive: Not Supported 
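The discovery log above reflects the kernel target that configure_kernel_target assembled earlier in the trace purely through the nvmet configfs tree: a subsystem with one namespace backed by a local block device, and a TCP port listening on 10.0.0.1:4420. Below is a condensed sketch of that sequence. The NQN, port number, address, and /dev/nvme1n1 are taken from the trace; the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs names and are an assumption here, since the trace only shows the values being echoed. The --hostnqn/--hostid pair used by the traced discover call is per-run and omitted.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet                 # exposes the configfs tree used below

# Subsystem with one namespace backed by a local block device
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1                                > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1                     > "$subsys/namespaces/1/device_path"
echo 1                                > "$subsys/namespaces/1/enable"

# TCP listener on the address the initiator side can reach
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Export the subsystem on the port and confirm it shows up in discovery
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420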
00:19:35.696 Namespace Granularity: Not Supported 00:19:35.696 SQ Associations: Not Supported 00:19:35.696 UUID List: Not Supported 00:19:35.696 Multi-Domain Subsystem: Not Supported 00:19:35.696 Fixed Capacity Management: Not Supported 00:19:35.696 Variable Capacity Management: Not Supported 00:19:35.696 Delete Endurance Group: Not Supported 00:19:35.696 Delete NVM Set: Not Supported 00:19:35.696 Extended LBA Formats Supported: Not Supported 00:19:35.696 Flexible Data Placement Supported: Not Supported 00:19:35.696 00:19:35.696 Controller Memory Buffer Support 00:19:35.696 ================================ 00:19:35.696 Supported: No 00:19:35.696 00:19:35.696 Persistent Memory Region Support 00:19:35.696 ================================ 00:19:35.696 Supported: No 00:19:35.696 00:19:35.696 Admin Command Set Attributes 00:19:35.696 ============================ 00:19:35.696 Security Send/Receive: Not Supported 00:19:35.696 Format NVM: Not Supported 00:19:35.696 Firmware Activate/Download: Not Supported 00:19:35.696 Namespace Management: Not Supported 00:19:35.696 Device Self-Test: Not Supported 00:19:35.696 Directives: Not Supported 00:19:35.696 NVMe-MI: Not Supported 00:19:35.696 Virtualization Management: Not Supported 00:19:35.696 Doorbell Buffer Config: Not Supported 00:19:35.696 Get LBA Status Capability: Not Supported 00:19:35.696 Command & Feature Lockdown Capability: Not Supported 00:19:35.696 Abort Command Limit: 1 00:19:35.696 Async Event Request Limit: 1 00:19:35.696 Number of Firmware Slots: N/A 00:19:35.696 Firmware Slot 1 Read-Only: N/A 00:19:35.696 Firmware Activation Without Reset: N/A 00:19:35.696 Multiple Update Detection Support: N/A 00:19:35.696 Firmware Update Granularity: No Information Provided 00:19:35.696 Per-Namespace SMART Log: No 00:19:35.696 Asymmetric Namespace Access Log Page: Not Supported 00:19:35.696 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:35.696 Command Effects Log Page: Not Supported 00:19:35.696 Get Log Page Extended Data: Supported 00:19:35.696 Telemetry Log Pages: Not Supported 00:19:35.696 Persistent Event Log Pages: Not Supported 00:19:35.696 Supported Log Pages Log Page: May Support 00:19:35.696 Commands Supported & Effects Log Page: Not Supported 00:19:35.696 Feature Identifiers & Effects Log Page:May Support 00:19:35.696 NVMe-MI Commands & Effects Log Page: May Support 00:19:35.696 Data Area 4 for Telemetry Log: Not Supported 00:19:35.696 Error Log Page Entries Supported: 1 00:19:35.696 Keep Alive: Not Supported 00:19:35.696 00:19:35.696 NVM Command Set Attributes 00:19:35.696 ========================== 00:19:35.696 Submission Queue Entry Size 00:19:35.696 Max: 1 00:19:35.696 Min: 1 00:19:35.696 Completion Queue Entry Size 00:19:35.696 Max: 1 00:19:35.696 Min: 1 00:19:35.696 Number of Namespaces: 0 00:19:35.696 Compare Command: Not Supported 00:19:35.697 Write Uncorrectable Command: Not Supported 00:19:35.697 Dataset Management Command: Not Supported 00:19:35.697 Write Zeroes Command: Not Supported 00:19:35.697 Set Features Save Field: Not Supported 00:19:35.697 Reservations: Not Supported 00:19:35.697 Timestamp: Not Supported 00:19:35.697 Copy: Not Supported 00:19:35.697 Volatile Write Cache: Not Present 00:19:35.697 Atomic Write Unit (Normal): 1 00:19:35.697 Atomic Write Unit (PFail): 1 00:19:35.697 Atomic Compare & Write Unit: 1 00:19:35.697 Fused Compare & Write: Not Supported 00:19:35.697 Scatter-Gather List 00:19:35.697 SGL Command Set: Supported 00:19:35.697 SGL Keyed: Not Supported 00:19:35.697 SGL Bit Bucket Descriptor: Not 
Supported 00:19:35.697 SGL Metadata Pointer: Not Supported 00:19:35.697 Oversized SGL: Not Supported 00:19:35.697 SGL Metadata Address: Not Supported 00:19:35.697 SGL Offset: Supported 00:19:35.697 Transport SGL Data Block: Not Supported 00:19:35.697 Replay Protected Memory Block: Not Supported 00:19:35.697 00:19:35.697 Firmware Slot Information 00:19:35.697 ========================= 00:19:35.697 Active slot: 0 00:19:35.697 00:19:35.697 00:19:35.697 Error Log 00:19:35.697 ========= 00:19:35.697 00:19:35.697 Active Namespaces 00:19:35.697 ================= 00:19:35.697 Discovery Log Page 00:19:35.697 ================== 00:19:35.697 Generation Counter: 2 00:19:35.697 Number of Records: 2 00:19:35.697 Record Format: 0 00:19:35.697 00:19:35.697 Discovery Log Entry 0 00:19:35.697 ---------------------- 00:19:35.697 Transport Type: 3 (TCP) 00:19:35.697 Address Family: 1 (IPv4) 00:19:35.697 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:35.697 Entry Flags: 00:19:35.697 Duplicate Returned Information: 0 00:19:35.697 Explicit Persistent Connection Support for Discovery: 0 00:19:35.697 Transport Requirements: 00:19:35.697 Secure Channel: Not Specified 00:19:35.697 Port ID: 1 (0x0001) 00:19:35.697 Controller ID: 65535 (0xffff) 00:19:35.697 Admin Max SQ Size: 32 00:19:35.697 Transport Service Identifier: 4420 00:19:35.697 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:35.697 Transport Address: 10.0.0.1 00:19:35.697 Discovery Log Entry 1 00:19:35.697 ---------------------- 00:19:35.697 Transport Type: 3 (TCP) 00:19:35.697 Address Family: 1 (IPv4) 00:19:35.697 Subsystem Type: 2 (NVM Subsystem) 00:19:35.697 Entry Flags: 00:19:35.697 Duplicate Returned Information: 0 00:19:35.697 Explicit Persistent Connection Support for Discovery: 0 00:19:35.697 Transport Requirements: 00:19:35.697 Secure Channel: Not Specified 00:19:35.697 Port ID: 1 (0x0001) 00:19:35.697 Controller ID: 65535 (0xffff) 00:19:35.697 Admin Max SQ Size: 32 00:19:35.697 Transport Service Identifier: 4420 00:19:35.697 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:35.697 Transport Address: 10.0.0.1 00:19:35.697 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:35.697 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:19:35.956 get_feature(0x01) failed 00:19:35.956 get_feature(0x02) failed 00:19:35.956 get_feature(0x04) failed 00:19:35.956 ===================================================== 00:19:35.956 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:35.956 ===================================================== 00:19:35.956 Controller Capabilities/Features 00:19:35.956 ================================ 00:19:35.956 Vendor ID: 0000 00:19:35.956 Subsystem Vendor ID: 0000 00:19:35.956 Serial Number: ec93d6401d974cc57764 00:19:35.956 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:35.956 Firmware Version: 6.8.9-20 00:19:35.956 Recommended Arb Burst: 6 00:19:35.956 IEEE OUI Identifier: 00 00 00 00:19:35.956 Multi-path I/O 00:19:35.956 May have multiple subsystem ports: Yes 00:19:35.956 May have multiple controllers: Yes 00:19:35.956 Associated with SR-IOV VF: No 00:19:35.956 Max Data Transfer Size: Unlimited 00:19:35.956 Max Number of Namespaces: 1024 00:19:35.956 Max Number of I/O Queues: 128 00:19:35.956 NVMe Specification Version (VS): 1.3 
00:19:35.956 NVMe Specification Version (Identify): 1.3 00:19:35.956 Maximum Queue Entries: 1024 00:19:35.956 Contiguous Queues Required: No 00:19:35.956 Arbitration Mechanisms Supported 00:19:35.956 Weighted Round Robin: Not Supported 00:19:35.956 Vendor Specific: Not Supported 00:19:35.956 Reset Timeout: 7500 ms 00:19:35.956 Doorbell Stride: 4 bytes 00:19:35.956 NVM Subsystem Reset: Not Supported 00:19:35.956 Command Sets Supported 00:19:35.956 NVM Command Set: Supported 00:19:35.956 Boot Partition: Not Supported 00:19:35.956 Memory Page Size Minimum: 4096 bytes 00:19:35.956 Memory Page Size Maximum: 4096 bytes 00:19:35.956 Persistent Memory Region: Not Supported 00:19:35.956 Optional Asynchronous Events Supported 00:19:35.956 Namespace Attribute Notices: Supported 00:19:35.956 Firmware Activation Notices: Not Supported 00:19:35.956 ANA Change Notices: Supported 00:19:35.956 PLE Aggregate Log Change Notices: Not Supported 00:19:35.956 LBA Status Info Alert Notices: Not Supported 00:19:35.956 EGE Aggregate Log Change Notices: Not Supported 00:19:35.956 Normal NVM Subsystem Shutdown event: Not Supported 00:19:35.956 Zone Descriptor Change Notices: Not Supported 00:19:35.956 Discovery Log Change Notices: Not Supported 00:19:35.956 Controller Attributes 00:19:35.956 128-bit Host Identifier: Supported 00:19:35.956 Non-Operational Permissive Mode: Not Supported 00:19:35.956 NVM Sets: Not Supported 00:19:35.956 Read Recovery Levels: Not Supported 00:19:35.956 Endurance Groups: Not Supported 00:19:35.956 Predictable Latency Mode: Not Supported 00:19:35.956 Traffic Based Keep ALive: Supported 00:19:35.956 Namespace Granularity: Not Supported 00:19:35.956 SQ Associations: Not Supported 00:19:35.956 UUID List: Not Supported 00:19:35.956 Multi-Domain Subsystem: Not Supported 00:19:35.956 Fixed Capacity Management: Not Supported 00:19:35.956 Variable Capacity Management: Not Supported 00:19:35.956 Delete Endurance Group: Not Supported 00:19:35.956 Delete NVM Set: Not Supported 00:19:35.956 Extended LBA Formats Supported: Not Supported 00:19:35.956 Flexible Data Placement Supported: Not Supported 00:19:35.956 00:19:35.956 Controller Memory Buffer Support 00:19:35.956 ================================ 00:19:35.956 Supported: No 00:19:35.956 00:19:35.956 Persistent Memory Region Support 00:19:35.956 ================================ 00:19:35.956 Supported: No 00:19:35.956 00:19:35.956 Admin Command Set Attributes 00:19:35.956 ============================ 00:19:35.956 Security Send/Receive: Not Supported 00:19:35.956 Format NVM: Not Supported 00:19:35.956 Firmware Activate/Download: Not Supported 00:19:35.956 Namespace Management: Not Supported 00:19:35.956 Device Self-Test: Not Supported 00:19:35.956 Directives: Not Supported 00:19:35.956 NVMe-MI: Not Supported 00:19:35.956 Virtualization Management: Not Supported 00:19:35.956 Doorbell Buffer Config: Not Supported 00:19:35.956 Get LBA Status Capability: Not Supported 00:19:35.956 Command & Feature Lockdown Capability: Not Supported 00:19:35.956 Abort Command Limit: 4 00:19:35.956 Async Event Request Limit: 4 00:19:35.956 Number of Firmware Slots: N/A 00:19:35.956 Firmware Slot 1 Read-Only: N/A 00:19:35.956 Firmware Activation Without Reset: N/A 00:19:35.956 Multiple Update Detection Support: N/A 00:19:35.956 Firmware Update Granularity: No Information Provided 00:19:35.956 Per-Namespace SMART Log: Yes 00:19:35.956 Asymmetric Namespace Access Log Page: Supported 00:19:35.956 ANA Transition Time : 10 sec 00:19:35.956 00:19:35.956 Asymmetric Namespace Access 
Capabilities 00:19:35.956 ANA Optimized State : Supported 00:19:35.956 ANA Non-Optimized State : Supported 00:19:35.956 ANA Inaccessible State : Supported 00:19:35.956 ANA Persistent Loss State : Supported 00:19:35.956 ANA Change State : Supported 00:19:35.956 ANAGRPID is not changed : No 00:19:35.956 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:35.956 00:19:35.956 ANA Group Identifier Maximum : 128 00:19:35.956 Number of ANA Group Identifiers : 128 00:19:35.956 Max Number of Allowed Namespaces : 1024 00:19:35.956 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:35.956 Command Effects Log Page: Supported 00:19:35.956 Get Log Page Extended Data: Supported 00:19:35.956 Telemetry Log Pages: Not Supported 00:19:35.956 Persistent Event Log Pages: Not Supported 00:19:35.957 Supported Log Pages Log Page: May Support 00:19:35.957 Commands Supported & Effects Log Page: Not Supported 00:19:35.957 Feature Identifiers & Effects Log Page:May Support 00:19:35.957 NVMe-MI Commands & Effects Log Page: May Support 00:19:35.957 Data Area 4 for Telemetry Log: Not Supported 00:19:35.957 Error Log Page Entries Supported: 128 00:19:35.957 Keep Alive: Supported 00:19:35.957 Keep Alive Granularity: 1000 ms 00:19:35.957 00:19:35.957 NVM Command Set Attributes 00:19:35.957 ========================== 00:19:35.957 Submission Queue Entry Size 00:19:35.957 Max: 64 00:19:35.957 Min: 64 00:19:35.957 Completion Queue Entry Size 00:19:35.957 Max: 16 00:19:35.957 Min: 16 00:19:35.957 Number of Namespaces: 1024 00:19:35.957 Compare Command: Not Supported 00:19:35.957 Write Uncorrectable Command: Not Supported 00:19:35.957 Dataset Management Command: Supported 00:19:35.957 Write Zeroes Command: Supported 00:19:35.957 Set Features Save Field: Not Supported 00:19:35.957 Reservations: Not Supported 00:19:35.957 Timestamp: Not Supported 00:19:35.957 Copy: Not Supported 00:19:35.957 Volatile Write Cache: Present 00:19:35.957 Atomic Write Unit (Normal): 1 00:19:35.957 Atomic Write Unit (PFail): 1 00:19:35.957 Atomic Compare & Write Unit: 1 00:19:35.957 Fused Compare & Write: Not Supported 00:19:35.957 Scatter-Gather List 00:19:35.957 SGL Command Set: Supported 00:19:35.957 SGL Keyed: Not Supported 00:19:35.957 SGL Bit Bucket Descriptor: Not Supported 00:19:35.957 SGL Metadata Pointer: Not Supported 00:19:35.957 Oversized SGL: Not Supported 00:19:35.957 SGL Metadata Address: Not Supported 00:19:35.957 SGL Offset: Supported 00:19:35.957 Transport SGL Data Block: Not Supported 00:19:35.957 Replay Protected Memory Block: Not Supported 00:19:35.957 00:19:35.957 Firmware Slot Information 00:19:35.957 ========================= 00:19:35.957 Active slot: 0 00:19:35.957 00:19:35.957 Asymmetric Namespace Access 00:19:35.957 =========================== 00:19:35.957 Change Count : 0 00:19:35.957 Number of ANA Group Descriptors : 1 00:19:35.957 ANA Group Descriptor : 0 00:19:35.957 ANA Group ID : 1 00:19:35.957 Number of NSID Values : 1 00:19:35.957 Change Count : 0 00:19:35.957 ANA State : 1 00:19:35.957 Namespace Identifier : 1 00:19:35.957 00:19:35.957 Commands Supported and Effects 00:19:35.957 ============================== 00:19:35.957 Admin Commands 00:19:35.957 -------------- 00:19:35.957 Get Log Page (02h): Supported 00:19:35.957 Identify (06h): Supported 00:19:35.957 Abort (08h): Supported 00:19:35.957 Set Features (09h): Supported 00:19:35.957 Get Features (0Ah): Supported 00:19:35.957 Asynchronous Event Request (0Ch): Supported 00:19:35.957 Keep Alive (18h): Supported 00:19:35.957 I/O Commands 00:19:35.957 ------------ 
00:19:35.957 Flush (00h): Supported 00:19:35.957 Write (01h): Supported LBA-Change 00:19:35.957 Read (02h): Supported 00:19:35.957 Write Zeroes (08h): Supported LBA-Change 00:19:35.957 Dataset Management (09h): Supported 00:19:35.957 00:19:35.957 Error Log 00:19:35.957 ========= 00:19:35.957 Entry: 0 00:19:35.957 Error Count: 0x3 00:19:35.957 Submission Queue Id: 0x0 00:19:35.957 Command Id: 0x5 00:19:35.957 Phase Bit: 0 00:19:35.957 Status Code: 0x2 00:19:35.957 Status Code Type: 0x0 00:19:35.957 Do Not Retry: 1 00:19:35.957 Error Location: 0x28 00:19:35.957 LBA: 0x0 00:19:35.957 Namespace: 0x0 00:19:35.957 Vendor Log Page: 0x0 00:19:35.957 ----------- 00:19:35.957 Entry: 1 00:19:35.957 Error Count: 0x2 00:19:35.957 Submission Queue Id: 0x0 00:19:35.957 Command Id: 0x5 00:19:35.957 Phase Bit: 0 00:19:35.957 Status Code: 0x2 00:19:35.957 Status Code Type: 0x0 00:19:35.957 Do Not Retry: 1 00:19:35.957 Error Location: 0x28 00:19:35.957 LBA: 0x0 00:19:35.957 Namespace: 0x0 00:19:35.957 Vendor Log Page: 0x0 00:19:35.957 ----------- 00:19:35.957 Entry: 2 00:19:35.957 Error Count: 0x1 00:19:35.957 Submission Queue Id: 0x0 00:19:35.957 Command Id: 0x4 00:19:35.957 Phase Bit: 0 00:19:35.957 Status Code: 0x2 00:19:35.957 Status Code Type: 0x0 00:19:35.957 Do Not Retry: 1 00:19:35.957 Error Location: 0x28 00:19:35.957 LBA: 0x0 00:19:35.957 Namespace: 0x0 00:19:35.957 Vendor Log Page: 0x0 00:19:35.957 00:19:35.957 Number of Queues 00:19:35.957 ================ 00:19:35.957 Number of I/O Submission Queues: 128 00:19:35.957 Number of I/O Completion Queues: 128 00:19:35.957 00:19:35.957 ZNS Specific Controller Data 00:19:35.957 ============================ 00:19:35.957 Zone Append Size Limit: 0 00:19:35.957 00:19:35.957 00:19:35.957 Active Namespaces 00:19:35.957 ================= 00:19:35.957 get_feature(0x05) failed 00:19:35.957 Namespace ID:1 00:19:35.957 Command Set Identifier: NVM (00h) 00:19:35.957 Deallocate: Supported 00:19:35.957 Deallocated/Unwritten Error: Not Supported 00:19:35.957 Deallocated Read Value: Unknown 00:19:35.957 Deallocate in Write Zeroes: Not Supported 00:19:35.957 Deallocated Guard Field: 0xFFFF 00:19:35.957 Flush: Supported 00:19:35.957 Reservation: Not Supported 00:19:35.957 Namespace Sharing Capabilities: Multiple Controllers 00:19:35.957 Size (in LBAs): 1310720 (5GiB) 00:19:35.957 Capacity (in LBAs): 1310720 (5GiB) 00:19:35.957 Utilization (in LBAs): 1310720 (5GiB) 00:19:35.957 UUID: 5e9074aa-8ac1-4114-b60f-e41f2a894845 00:19:35.957 Thin Provisioning: Not Supported 00:19:35.957 Per-NS Atomic Units: Yes 00:19:35.957 Atomic Boundary Size (Normal): 0 00:19:35.957 Atomic Boundary Size (PFail): 0 00:19:35.957 Atomic Boundary Offset: 0 00:19:35.957 NGUID/EUI64 Never Reused: No 00:19:35.957 ANA group ID: 1 00:19:35.957 Namespace Write Protected: No 00:19:35.957 Number of LBA Formats: 1 00:19:35.957 Current LBA Format: LBA Format #00 00:19:35.957 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:35.957 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@508 -- # nvmfcleanup 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:19:35.957 21:00:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:35.957 rmmod nvme_tcp 00:19:35.957 rmmod nvme_fabrics 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@509 -- # '[' -n '' ']' 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # iptr 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # iptables-save 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # iptables-restore 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:19:35.957 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@242 -- # remove_spdk_ns 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # return 0 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:36.216 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # echo 0 00:19:36.474 21:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:36.474 21:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@709 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:36.474 21:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:36.474 21:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@711 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:36.474 21:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # modules=(/sys/module/nvmet/holders/*) 00:19:36.474 21:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # modprobe -r nvmet_tcp nvmet 00:19:36.474 21:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:37.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:37.300 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:37.300 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:37.300 00:19:37.300 real 0m3.321s 00:19:37.300 user 0m1.090s 00:19:37.300 sys 0m1.539s 00:19:37.300 21:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:37.300 21:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.300 ************************************ 00:19:37.300 END TEST nvmf_identify_kernel_target 00:19:37.300 ************************************ 00:19:37.300 21:00:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:37.300 21:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:37.300 21:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:37.300 21:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.300 ************************************ 00:19:37.300 START TEST nvmf_auth_host 00:19:37.300 ************************************ 00:19:37.300 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:37.559 * Looking for test storage... 
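The clean_kernel_target steps traced at the end of the previous test tear the configfs objects down in roughly the reverse order they were created; a minimal sketch, using the same paths as the setup sketch above (the trailing modprobe -r, as in the trace, is safe only when no other test still needs the modules).

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

echo 0 > "$subsys/namespaces/1/enable"    # quiesce the namespace first
rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir  "$subsys/namespaces/1"
rmdir  "$nvmet/ports/1"
rmdir  "$subsys"
modprobe -r nvmet_tcp nvmet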
00:19:37.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.559 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # 
nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # prepare_net_devs 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # local -g is_hw=no 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # remove_spdk_ns 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # nvmf_veth_init 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:37.560 21:00:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:19:37.560 Cannot find device "nvmf_init_br" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:19:37.560 Cannot find device "nvmf_init_br2" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:19:37.560 Cannot find device "nvmf_tgt_br" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.560 Cannot find device "nvmf_tgt_br2" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:19:37.560 Cannot find device "nvmf_init_br" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:19:37.560 Cannot find device "nvmf_init_br2" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:19:37.560 Cannot find device "nvmf_tgt_br" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:19:37.560 Cannot find device "nvmf_tgt_br2" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:19:37.560 Cannot find device "nvmf_br" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:19:37.560 Cannot find device "nvmf_init_if" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:19:37.560 Cannot find device "nvmf_init_if2" 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.560 Cannot open 
network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:37.560 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i 
nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:37.819 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:37.820 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:37.820 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:37.820 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:19:37.820 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:37.820 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:19:37.820 00:19:37.820 --- 10.0.0.3 ping statistics --- 00:19:37.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.820 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:37.820 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:19:37.820 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:37.820 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:19:37.820 00:19:37.820 --- 10.0.0.4 ping statistics --- 00:19:37.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.820 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:37.820 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:37.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:37.820 00:19:37.820 --- 10.0.0.1 ping statistics --- 00:19:37.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.820 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:38.078 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:38.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:38.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:38.079 00:19:38.079 --- 10.0.0.2 ping statistics --- 00:19:38.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.079 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # return 0 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@501 -- # nvmfpid=90792 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # waitforlisten 90792 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 90792 ']' 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
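The commands traced above are nvmf_veth_init building the test network (veth pairs bridged between the host namespace and nvmf_tgt_ns_spdk, 10.0.0.x/24 addresses, iptables rules for port 4420, ping checks) followed by nvmfappstart launching nvmf_tgt inside that namespace with nvme_auth tracing. A minimal sketch of the same setup, reduced to the single 10.0.0.1 -> 10.0.0.3 initiator/target pair that the pings verify; the interface names, addresses, iptables rule and nvmf_tgt command line mirror the log, while the reduction to one pair is an assumption rather than the exact helper code (run as root):

  # One bridged veth pair between the host (initiator side) and the target netns
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Allow NVMe/TCP (port 4420) in from the initiator interface, then verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3
  # Start the SPDK target inside the namespace with nvme_auth debug tracing, as in the log
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &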
00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:38.079 21:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.014 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:39.014 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:19:39.014 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:19:39.014 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.014 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.014 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.014 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # local digest len file key 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # local -A digests 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # digest=null 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # len=32 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # key=a718841f5291b80973c233c0b23786a4 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # mktemp -t spdk.key-null.XXX 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-null.pCR 00:19:39.273 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # format_dhchap_key a718841f5291b80973c233c0b23786a4 0 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@739 -- # format_key DHHC-1 a718841f5291b80973c233c0b23786a4 0 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # local prefix key digest 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # key=a718841f5291b80973c233c0b23786a4 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digest=0 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@725 -- # python - 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-null.pCR 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-null.pCR 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pCR 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # local digest len file key 00:19:39.274 21:00:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # local -A digests 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # digest=sha512 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # len=64 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # key=0e2fae6686b94c35994f4d16da8f641bc1ce60a41de0c121cdae4b4eafca616c 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha512.XXX 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha512.POV 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # format_dhchap_key 0e2fae6686b94c35994f4d16da8f641bc1ce60a41de0c121cdae4b4eafca616c 3 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@739 -- # format_key DHHC-1 0e2fae6686b94c35994f4d16da8f641bc1ce60a41de0c121cdae4b4eafca616c 3 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # local prefix key digest 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # key=0e2fae6686b94c35994f4d16da8f641bc1ce60a41de0c121cdae4b4eafca616c 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digest=3 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@725 -- # python - 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha512.POV 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha512.POV 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.POV 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # local digest len file key 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # local -A digests 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # digest=null 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # len=48 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # key=e1f840a04da5abebad331ba242fd89903e7578a89482f1c9 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # mktemp -t spdk.key-null.XXX 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-null.w6K 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # format_dhchap_key e1f840a04da5abebad331ba242fd89903e7578a89482f1c9 0 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@739 -- # format_key DHHC-1 e1f840a04da5abebad331ba242fd89903e7578a89482f1c9 0 
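gen_dhchap_key, traced above, draws len/2 random bytes from /dev/urandom with xxd -p, uses the resulting ASCII hex string as the secret, and wraps it via a small python helper into the DHHC-1:<digest>:<base64>: form before writing it to a chmod-0600 temp file (digest index 0=none, 1=sha256, 2=sha384, 3=sha512, matching the digests map in the trace). A hedged sketch of that wrapping; the xxd call and digest numbering come from the log, while the base64(secret || crc32) layout is an assumption based on the usual NVMe DH-HMAC-CHAP secret representation, not copied from the helper:

  gen_dhchap_key_sketch() {
      local digest_id=$1 len=$2 key
      # Secret is an ASCII hex string of <len> characters, as in the trace
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      # Assumption: the DHHC-1 payload is base64 of the secret bytes followed by their
      # little-endian CRC32, terminated with ':'
      python3 -c 'import base64, struct, sys, zlib; k = sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" "$digest_id"
  }
  # e.g. gen_dhchap_key_sketch 0 32 > "$(mktemp -t spdk.key-null.XXX)", then chmod 0600 the file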
00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # local prefix key digest 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # key=e1f840a04da5abebad331ba242fd89903e7578a89482f1c9 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digest=0 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@725 -- # python - 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-null.w6K 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-null.w6K 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.w6K 00:19:39.274 21:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # local digest len file key 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # local -A digests 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # digest=sha384 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # len=48 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # key=a86d9660c46b9fcec07dd1b634de464677e3164f226a1ccd 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha384.XXX 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha384.oto 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # format_dhchap_key a86d9660c46b9fcec07dd1b634de464677e3164f226a1ccd 2 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@739 -- # format_key DHHC-1 a86d9660c46b9fcec07dd1b634de464677e3164f226a1ccd 2 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # local prefix key digest 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # key=a86d9660c46b9fcec07dd1b634de464677e3164f226a1ccd 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digest=2 00:19:39.274 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@725 -- # python - 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha384.oto 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha384.oto 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.oto 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # local digest len file key 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.533 21:00:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # local -A digests 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # digest=sha256 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # len=32 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # key=ddb325d3505076e8908e260002ba836b 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha256.XXX 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha256.AWx 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # format_dhchap_key ddb325d3505076e8908e260002ba836b 1 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@739 -- # format_key DHHC-1 ddb325d3505076e8908e260002ba836b 1 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # local prefix key digest 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # key=ddb325d3505076e8908e260002ba836b 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digest=1 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@725 -- # python - 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha256.AWx 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha256.AWx 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.AWx 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # local digest len file key 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # local -A digests 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # digest=sha256 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # len=32 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # key=842d409dff6e658f04b64a04b81a8108 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha256.XXX 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha256.02j 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # format_dhchap_key 842d409dff6e658f04b64a04b81a8108 1 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@739 -- # format_key DHHC-1 842d409dff6e658f04b64a04b81a8108 1 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # local prefix key digest 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
key=842d409dff6e658f04b64a04b81a8108 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digest=1 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@725 -- # python - 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha256.02j 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha256.02j 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.02j 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # local digest len file key 00:19:39.533 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # local -A digests 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # digest=sha384 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # len=48 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # key=0bd543f1b1db9dda7cea420480a14a9be458e09bbc573f48 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha384.XXX 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha384.trQ 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # format_dhchap_key 0bd543f1b1db9dda7cea420480a14a9be458e09bbc573f48 2 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@739 -- # format_key DHHC-1 0bd543f1b1db9dda7cea420480a14a9be458e09bbc573f48 2 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # local prefix key digest 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # key=0bd543f1b1db9dda7cea420480a14a9be458e09bbc573f48 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digest=2 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@725 -- # python - 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha384.trQ 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha384.trQ 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.trQ 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # local digest len file key 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # local -A digests 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # digest=null 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # len=32 00:19:39.534 21:00:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # key=ba5cb8c9e3bfd233237f6a15f01ca7a8 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # mktemp -t spdk.key-null.XXX 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-null.zqx 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # format_dhchap_key ba5cb8c9e3bfd233237f6a15f01ca7a8 0 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@739 -- # format_key DHHC-1 ba5cb8c9e3bfd233237f6a15f01ca7a8 0 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # local prefix key digest 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # key=ba5cb8c9e3bfd233237f6a15f01ca7a8 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digest=0 00:19:39.534 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@725 -- # python - 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-null.zqx 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-null.zqx 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.zqx 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # local digest len file key 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # local -A digests 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # digest=sha512 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # len=64 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # key=7552d5977f7b0600f1ea7c74eeeba34e182bf1d3d4fe7255f3fdef1f2b3e1d37 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # mktemp -t spdk.key-sha512.XXX 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # file=/tmp/spdk.key-sha512.fg0 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # format_dhchap_key 7552d5977f7b0600f1ea7c74eeeba34e182bf1d3d4fe7255f3fdef1f2b3e1d37 3 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@739 -- # format_key DHHC-1 7552d5977f7b0600f1ea7c74eeeba34e182bf1d3d4fe7255f3fdef1f2b3e1d37 3 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # local prefix key digest 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # prefix=DHHC-1 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # key=7552d5977f7b0600f1ea7c74eeeba34e182bf1d3d4fe7255f3fdef1f2b3e1d37 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digest=3 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@725 -- # python - 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # chmod 0600 /tmp/spdk.key-sha512.fg0 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # echo /tmp/spdk.key-sha512.fg0 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.fg0 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 90792 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 90792 ']' 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:39.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:39.793 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pCR 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.POV ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.POV 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.w6K 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.oto ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.oto 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.AWx 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.02j ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.02j 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.trQ 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.zqx ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.zqx 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.fg0 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:40.052 21:00:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # nvmet=/sys/kernel/config/nvmet 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@655 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@657 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # local block nvme 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:40.052 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # modprobe nvmet 00:19:40.311 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:40.311 21:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:40.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:40.569 Waiting for block devices as requested 00:19:40.569 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:40.827 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:41.394 21:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:19:41.394 21:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:41.394 21:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # is_block_zoned nvme0n1 00:19:41.394 21:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:19:41.394 21:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:41.394 21:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:41.394 21:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # block_in_use nvme0n1 00:19:41.394 21:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:19:41.394 21:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:41.394 No valid GPT data, bailing 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # nvme=/dev/nvme0n1 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # is_block_zoned nvme0n2 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # block_in_use nvme0n2 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:41.394 No valid GPT data, bailing 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # nvme=/dev/nvme0n2 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # is_block_zoned nvme0n3 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # block_in_use nvme0n3 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:19:41.394 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:41.653 No valid GPT data, bailing 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # nvme=/dev/nvme0n3 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # is_block_zoned nvme1n1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # block_in_use nvme1n1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:41.653 No valid GPT data, bailing 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # nvme=/dev/nvme1n1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # [[ -b /dev/nvme1n1 ]] 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # echo 1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # echo /dev/nvme1n1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo 1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 10.0.0.1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo tcp 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 4420 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo ipv4 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -a 10.0.0.1 -t tcp -s 4420 00:19:41.653 00:19:41.653 Discovery Log Number of Records 2, Generation counter 2 00:19:41.653 =====Discovery Log Entry 0====== 00:19:41.653 trtype: tcp 00:19:41.653 adrfam: ipv4 00:19:41.653 subtype: current discovery subsystem 00:19:41.653 treq: not specified, sq flow control disable supported 00:19:41.653 portid: 1 00:19:41.653 trsvcid: 4420 00:19:41.653 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:41.653 traddr: 10.0.0.1 00:19:41.653 eflags: none 00:19:41.653 sectype: none 00:19:41.653 =====Discovery Log Entry 1====== 00:19:41.653 trtype: tcp 00:19:41.653 adrfam: ipv4 00:19:41.653 subtype: nvme subsystem 00:19:41.653 treq: not specified, sq flow control disable supported 00:19:41.653 portid: 1 00:19:41.653 trsvcid: 4420 00:19:41.653 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:41.653 traddr: 10.0.0.1 00:19:41.653 eflags: none 00:19:41.653 sectype: none 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:41.653 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 
10.0.0.1 ]] 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.913 nvme0n1 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:41.913 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.173 nvme0n1 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.173 
21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:42.173 21:00:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.173 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.432 nvme0n1 00:19:42.432 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.432 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.432 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.432 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.432 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.432 21:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:42.432 21:00:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:42.432 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.433 nvme0n1 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.433 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.690 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.690 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.691 21:00:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.691 nvme0n1 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.691 
21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.691 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
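For reference, each digest/DH-group/key combination exercised in this trace boils down to the short RPC sequence sketched below. This is a reconstruction from the trace above, not part of the original log: rpc_cmd is assumed to be the autotest wrapper around SPDK's scripts/rpc.py, and key0/ckey0 are assumed to be DH-HMAC-CHAP key names registered earlier in auth.sh.

  # pin the initiator to a single digest and DH group for this iteration
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # connect to the target, authenticating with host key 0 and controller key 0
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # confirm the controller appeared, then detach before the next combination
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0

The outer loops over digests, dhgroups and keyids visible in the trace repeat this same attach/verify/detach cycle for every combination.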
00:19:42.950 nvme0n1 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.950 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:43.208 21:00:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.208 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.209 21:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.468 nvme0n1 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.468 21:00:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.468 21:00:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.468 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.727 nvme0n1 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.727 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.728 nvme0n1 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.728 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.986 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 nvme0n1 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:43.987 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.246 nvme0n1 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.246 21:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:44.814 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.073 21:00:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.073 nvme0n1 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.073 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.331 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.331 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.332 21:00:55 
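The xtrace above is driven by the nested loops visible at host/auth.sh@101-@104: for every DH group and every key id it first programs the target side (nvmet_auth_set_key) and then authenticates from the host side (connect_authenticate). A minimal sketch of that driver loop, reconstructed only from the commands echoed in this log; the dhgroups/keys arrays themselves are not shown in this excerpt and are assumed:

    # Sketch of the loop at host/auth.sh@101-@104, reconstructed from the xtrace output.
    # The dhgroups/keys arrays are assumptions; this part of the log exercises
    # ffdhe4096, ffdhe6144 and ffdhe8192 with key ids 0..4.
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"      # program the kernel target side
            connect_authenticate sha256 "$dhgroup" "$keyid"    # attach, verify and detach on the host side
        done
    done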
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.332 21:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.332 nvme0n1 00:19:45.332 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.332 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.332 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.332 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.332 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.332 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.591 nvme0n1 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.591 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.850 nvme0n1 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.850 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:46.109 21:00:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.109 nvme0n1 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.109 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:46.386 21:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.299 nvme0n1 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:48.299 21:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:48.299 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:48.300 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.867 nvme0n1 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:48.867 21:00:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:48.867 21:00:59 
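Each connect_authenticate round in this log follows the same host-side pattern: restrict the negotiable DH-HMAC-CHAP parameters with bdev_nvme_set_options, attach over TCP with the host key (and controller key, when one exists), confirm the controller appears as nvme0, then detach. A hedged reconstruction of that sequence, using only the RPC invocations shown in the xtrace; the rpc_cmd helper, the 10.0.0.1:4420 listener and the NQNs are taken verbatim from the log, the rest is illustrative rather than the exact body of host/auth.sh:

    # Reconstruction of one connect_authenticate round (host/auth.sh@55-@65), not the verbatim script.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # DH-HMAC-CHAP succeeded if the controller shows up under the expected name
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0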
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:48.867 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.127 nvme0n1 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:49.127 21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.127 
21:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.695 nvme0n1 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.695 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.954 nvme0n1 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.954 21:01:00 
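Note that key id 4 has no paired controller key: the log shows ckey= followed by [[ -z '' ]], and the subsequent bdev_nvme_attach_controller call carries only --dhchap-key key4, i.e. no bidirectional controller authentication. The script handles this with the array expansion visible at host/auth.sh@58, which yields the extra arguments only when a controller key is defined. A small sketch of that idiom; the ckeys array contents are assumptions, only the expansion itself comes from the log:

    # Expansion from host/auth.sh@58: produces "--dhchap-ctrlr-key ckeyN" when ckeys[keyid]
    # is non-empty, and expands to nothing for key id 4.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"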
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:49.954 21:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.521 nvme0n1 00:19:50.521 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:50.521 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.521 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.521 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:50.521 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.521 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:50.780 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.347 nvme0n1 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # 
xtrace_disable 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:51.347 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:51.348 21:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:51.348 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:51.348 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.348 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.348 
21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:51.348 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.348 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:51.348 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:51.348 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:51.348 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.348 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:51.348 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.916 nvme0n1 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:51.916 21:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.483 nvme0n1 00:19:52.483 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:52.483 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.483 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:52.483 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.483 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.483 21:01:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:52.742 21:01:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:52.742 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.309 nvme0n1 00:19:53.309 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.309 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.309 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.309 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.309 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.309 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.309 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.309 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.309 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.309 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.310 21:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.310 nvme0n1 00:19:53.310 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.310 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.310 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.310 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.310 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.569 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.570 nvme0n1 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:53.570 
21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.570 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.829 nvme0n1 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.829 
21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:53.829 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:53.830 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.089 nvme0n1 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.089 nvme0n1 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.089 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.349 21:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.349 nvme0n1 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.349 
21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:54.349 21:01:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.349 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.608 nvme0n1 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:54.608 21:01:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:54.608 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.609 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.868 nvme0n1 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.868 21:01:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.868 nvme0n1 00:19:54.868 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:55.127 
21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:55.127 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:55.128 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:55.128 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:55.128 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.128 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
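[editorial aside] The block above is one complete pass of the sha384/ffdhe3072 iteration in host/auth.sh: nvmet_auth_set_key installs the key material for this keyid on the kernel nvmet target, then connect_authenticate drives the SPDK host through attach, verify, and detach. As a reading aid, here is a minimal Bash sketch of the target-side helper. xtrace does not print redirection targets, so the configfs paths below are an assumption (the standard Linux nvmet host attributes), not something taken from the log; the keys/ckeys arrays are the test's own DHHC-1 key strings.

# Hedged sketch of nvmet_auth_set_key (host/auth.sh@42-51 in the trace above).
# Assumption: the bare `echo` commands in the xtrace are redirected into the
# Linux nvmet host's configfs attributes; redirections are not shown by xtrace.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host_cfs}/dhchap_hash"     # e.g. 'hmac(sha384)'
    echo "$dhgroup"        > "${host_cfs}/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "$key"            > "${host_cfs}/dhchap_key"      # DHHC-1:xx:...
    # keyid 4 has no controller key in this run, so the write is conditional
    # and that iteration exercises unidirectional authentication only.
    [[ -z $ckey ]] || echo "$ckey" > "${host_cfs}/dhchap_ctrl_key"
}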
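[editorial aside] The host side of the same iteration is connect_authenticate (host/auth.sh@55-65): restrict the initiator to the digest/dhgroup pair under test, resolve the target address via get_main_ns_ip (which selects NVMF_INITIATOR_IP, i.e. 10.0.0.1, for tcp), attach with the matching DH-HMAC-CHAP keys, and confirm the controller appears before detaching. A condensed sketch follows, using only rpc_cmd calls and flags that appear verbatim in the trace; the function name and argument handling are illustrative, not the real script.

# Condensed sketch of the connect_authenticate flow visible in the trace.
# rpc_cmd is the test framework's wrapper around scripts/rpc.py; the helper
# name below is hypothetical and only summarizes what the log already shows.
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    # The controller key is optional: keyid 4 attaches with --dhchap-key only.
    local ckey=()
    [[ -z ${ckeys[keyid]:-} ]] || ckey=(--dhchap-ctrlr-key "ckey${keyid}")

    # Limit the initiator to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP to the kernel nvmet subsystem with DH-HMAC-CHAP.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Pass criterion seen in the log: nvme0 shows up, then is detached before
    # the next digest/dhgroup/keyid combination is tried.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}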
00:19:55.128 nvme0n1 00:19:55.128 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.128 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.128 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.128 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.128 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.128 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:55.387 21:01:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.387 21:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.387 nvme0n1 00:19:55.387 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.387 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.387 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.387 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.387 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.387 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.646 21:01:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:55.646 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.647 21:01:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.647 nvme0n1 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.647 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.906 nvme0n1 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:55.906 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:19:56.165 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.166 nvme0n1 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.166 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.425 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.425 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.425 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.425 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.425 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.425 21:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.425 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.684 nvme0n1 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.684 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.685 21:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.685 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.944 nvme0n1 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.944 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:57.203 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:57.203 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:57.203 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.203 21:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:57.203 21:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.462 nvme0n1 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:57.462 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.721 nvme0n1 00:19:57.721 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:57.721 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.721 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:57.721 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.721 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.721 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:57.980 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.239 nvme0n1 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.239 21:01:08 
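
The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) drive the target side of each pass: before the host reconnects, the Linux kernel nvmet host entry is rewritten with the digest, DH group and DHHC-1 secrets for the key id under test. The echoes of 'hmac(sha384)', the dhgroup and the secrets are presumably redirected into nvmet configfs attributes; a minimal sketch under that assumption (the configfs path and array names are not taken from this excerpt):

    # Hypothetical reconstruction -- $nvmet_host is assumed to point at the
    # configfs entry for nqn.2024-02.io.spdk:host0, e.g.
    #   /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}

        echo "hmac(${digest})" > "${nvmet_host}/dhchap_hash"    # e.g. hmac(sha384)
        echo "${dhgroup}"      > "${nvmet_host}/dhchap_dhgroup" # e.g. ffdhe6144
        # DHHC-1:NN:<base64>: secrets; NN encodes how the secret was transformed
        echo "${key}" > "${nvmet_host}/dhchap_key"
        # A controller (bidirectional) secret is only written when this keyid has one
        [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host}/dhchap_ctrl_key"
    }
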
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.239 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:58.240 21:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.807 nvme0n1 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:58.807 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:58.808 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.808 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:58.808 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.375 nvme0n1 00:19:59.375 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:59.375 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.375 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:59.375 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.375 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.376 21:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
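
On the SPDK side, each connect_authenticate pass reduces to the two RPCs visible in the trace: bdev_nvme_set_options restricts the allowed DH-HMAC-CHAP digest and DH group, then bdev_nvme_attach_controller dials the kernel target with --dhchap-key/--dhchap-ctrlr-key naming keys registered earlier in the run (not shown in this excerpt). A hedged sketch of the equivalent standalone calls, assuming rpc_cmd wraps scripts/rpc.py:

    # Sketch only; key1/ckey1 stand for key names set up earlier in the test.
    scripts/rpc.py bdev_nvme_set_options \
            --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1
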
DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:59.376 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.944 nvme0n1 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.944 21:01:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:19:59.944 21:01:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:19:59.944 21:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.512 nvme0n1 00:20:00.512 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:00.512 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.512 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.512 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:00.512 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
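
The get_main_ns_ip block that repeats before every attach (nvmf/common.sh@761-775) only decides which address the host dials for the active transport; on this TCP run it always resolves NVMF_INITIATOR_IP, i.e. 10.0.0.1. A rough reconstruction of that helper, inferred from the xtrace rather than copied from nvmf/common.sh:

    # Reconstruction from the trace; the exact guard conditions in
    # nvmf/common.sh may differ.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}     # -> NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1              # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }
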
key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:00.771 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:00.772 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:00.772 
21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.339 nvme0n1 00:20:01.339 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:01.339 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.339 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.339 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:01.339 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.339 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:01.339 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.339 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.339 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:01.339 21:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@557 -- # xtrace_disable 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:01.339 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.907 nvme0n1 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:01.907 21:01:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:01.907 21:01:12 
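
The for-loops at host/auth.sh@100-103 in the trace drive the whole matrix: for each digest, each DH group and each key index the target secret is rewritten and a fresh authenticated connect is attempted; at this point the run has moved from sha384 on to sha512 and restarted the group list at ffdhe2048. A condensed sketch of that driver loop, listing only the values that actually appear in this excerpt:

    # Driver loop as suggested by host/auth.sh@100-104 in the trace; the real
    # script may iterate over more digests/groups than are visible here.
    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do                         # key ids 0..4 here
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid" # host side
            done
        done
    done
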
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:01.907 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.201 nvme0n1 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:02.201 21:01:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.201 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.478 nvme0n1 00:20:02.478 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.478 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.478 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.478 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.478 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.478 21:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:20:02.478 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.479 nvme0n1 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.479 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 nvme0n1 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
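
Every attach in the trace is followed by the same check-and-cleanup pair (host/auth.sh@64-65): list the bdev_nvme controllers, confirm that exactly the expected name came back, then detach so the next digest/dhgroup/key combination starts from a clean state. In shorthand:

    # The commands below appear verbatim in the trace; rpc_cmd is the test
    # helper that forwards them to the running SPDK application.
    ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $ctrlr == "nvme0" ]]                      # authentication + attach succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0    # tear down before the next pass
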
host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@762 -- # ip_candidates=() 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 nvme0n1 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.738 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:02.997 nvme0n1 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:02.997 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.257 nvme0n1 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:03.257 
21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.257 21:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.257 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.516 nvme0n1 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:03.516 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.517 
21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.517 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.775 nvme0n1 00:20:03.775 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.775 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.775 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.775 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.775 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.775 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.775 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.775 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.775 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:03.776 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.035 nvme0n1 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.035 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.294 nvme0n1 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.294 
21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:04.294 21:01:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.294 21:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.553 nvme0n1 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:20:04.553 21:01:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.553 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.811 nvme0n1 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.811 21:01:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:04.811 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.069 nvme0n1 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:05.070 
21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.070 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
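The iterations above and below all follow the same pattern; a compact reconstruction of that loop, pieced together from the host/auth.sh line references in the xtrace (@42-51, @55-65, @101-104), is sketched here. It is not the verbatim script source: rpc_cmd, the keys/ckeys arrays and nvmet_auth_set_key are assumed to be defined earlier in host/auth.sh, and the address, port and NQNs are simply the values visible in this run.

connect_authenticate() {   # host/auth.sh@55-65 in the trace
    local digest dhgroup keyid ckey
    digest=$1 dhgroup=$2 keyid=$3
    # Controller (bidirectional) key only if a ckey exists for this keyid;
    # keyid 4 has none in this run, so the array stays empty.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Limit the initiator to one digest and one DH group for this pass.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"

    # Connect over TCP and authenticate with key<keyid> (address, port and
    # NQNs are the values visible in this run).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

    # The attach only succeeds if DH-HMAC-CHAP succeeded; verify, then
    # detach so the next key/dhgroup combination starts from a clean state.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

# Outer loops, host/auth.sh@101-104: every DH group is tried against every
# key index, all with hmac(sha512) in this part of the run.
# nvmet_auth_set_key (host/auth.sh@42-51) pushes the matching digest,
# dhgroup and key to the kernel nvmet target; the targets of its echo
# statements are not visible in this excerpt.
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done

The unidirectional and bidirectional cases differ only in whether ckeys[keyid] is set: keyid 4 has no controller key in this run, so the ckey array expands to nothing and only --dhchap-key key4 is passed to bdev_nvme_attach_controller.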
00:20:05.329 nvme0n1 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:05.329 21:01:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.329 21:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.329 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.330 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.588 nvme0n1 00:20:05.588 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.588 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.588 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.588 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.588 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.846 21:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.846 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:05.847 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.847 21:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:05.847 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:05.847 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:05.847 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.847 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:05.847 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.106 nvme0n1 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.106 21:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.673 nvme0n1 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:06.673 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.674 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.933 nvme0n1 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:06.933 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:07.191 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:07.191 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.191 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.191 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:07.191 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.191 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:07.191 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:07.191 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:07.191 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:07.192 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:07.192 21:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.450 nvme0n1 00:20:07.450 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:07.450 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.450 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.450 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTcxODg0MWY1MjkxYjgwOTczYzIzM2MwYjIzNzg2YTRP8d95: 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: ]] 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGUyZmFlNjY4NmI5NGMzNTk5NGY0ZDE2ZGE4ZjY0MWJjMWNlNjBhNDFkZTBjMTIxY2RhZTRiNGVhZmNhNjE2Yze/AZI=: 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:07.451 21:01:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:07.451 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.018 nvme0n1 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.018 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.019 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:08.019 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.019 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:08.019 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:08.019 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:08.019 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.019 21:01:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:08.019 21:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.585 nvme0n1 00:20:08.585 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:08.585 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.585 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.585 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:08.585 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.585 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZGRiMzI1ZDM1MDUwNzZlODkwOGUyNjAwMDJiYTgzNmLXhpbb: 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: ]] 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODQyZDQwOWRmZjZlNjU4ZjA0YjY0YTA0YjgxYTgxMDiVTncg: 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:08.844 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.411 nvme0n1 00:20:09.411 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:09.411 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.411 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.411 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:09.411 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.411 21:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGJkNTQzZjFiMWRiOWRkYTdjZWE0MjA0ODBhMTRhOWJlNDU4ZTA5YmJjNTczZjQ44dgJ7w==: 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: ]] 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmE1Y2I4YzllM2JmZDIzMzIzN2Y2YTE1ZjAxY2E3YTjTl1Hl: 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:09.411 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.979 nvme0n1 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzU1MmQ1OTc3ZjdiMDYwMGYxZWE3Yzc0ZWVlYmEzNGUxODJiZjFkM2Q0ZmU3MjU1ZjNmZGVmMWYyYjNlMWQzN42jzEo=: 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.979 21:01:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:09.979 21:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.546 nvme0n1 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:10.546 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTFmODQwYTA0ZGE1YWJlYmFkMzMxYmEyNDJmZDg5OTAzZTc1NzhhODk0ODJmMWM5M9UlQA==: 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: ]] 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTg2ZDk2NjBjNDZiOWZjZWMwN2RkMWI2MzRkZTQ2NDY3N2UzMTY0ZjIyNmExY2NkqKY0mw==: 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@646 -- # 
local es=0 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:10.547 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@649 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.806 request: 00:20:10.806 { 00:20:10.806 "name": "nvme0", 00:20:10.806 "trtype": "tcp", 00:20:10.806 "traddr": "10.0.0.1", 00:20:10.806 "adrfam": "ipv4", 00:20:10.806 "trsvcid": "4420", 00:20:10.806 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:10.806 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:10.806 "prchk_reftag": false, 00:20:10.806 "prchk_guard": false, 00:20:10.806 "hdgst": false, 00:20:10.806 "ddgst": false, 00:20:10.806 "method": "bdev_nvme_attach_controller", 00:20:10.806 "req_id": 1 00:20:10.806 } 00:20:10.806 Got JSON-RPC error response 00:20:10.806 response: 00:20:10.806 { 00:20:10.806 "code": -5, 00:20:10.806 "message": "Input/output error" 00:20:10.806 } 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@649 -- # es=1 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.806 21:01:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@646 -- # local es=0 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@649 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.806 request: 00:20:10.806 { 00:20:10.806 "name": "nvme0", 00:20:10.806 "trtype": "tcp", 00:20:10.806 "traddr": "10.0.0.1", 00:20:10.806 "adrfam": "ipv4", 00:20:10.806 "trsvcid": "4420", 00:20:10.806 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:10.806 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:10.806 "prchk_reftag": false, 00:20:10.806 "prchk_guard": false, 00:20:10.806 "hdgst": false, 00:20:10.806 "ddgst": false, 00:20:10.806 "dhchap_key": "key2", 00:20:10.806 "method": "bdev_nvme_attach_controller", 00:20:10.806 "req_id": 1 00:20:10.806 } 00:20:10.806 Got JSON-RPC error response 00:20:10.806 response: 00:20:10.806 { 00:20:10.806 "code": -5, 00:20:10.806 "message": "Input/output error" 00:20:10.806 } 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@649 -- # es=1 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.806 21:01:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@761 -- # local ip 00:20:10.806 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # ip_candidates=() 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@762 -- # local -A ip_candidates 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@646 -- # local es=0 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@649 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.807 request: 00:20:10.807 { 00:20:10.807 "name": "nvme0", 00:20:10.807 "trtype": "tcp", 00:20:10.807 "traddr": "10.0.0.1", 00:20:10.807 "adrfam": "ipv4", 00:20:10.807 "trsvcid": "4420", 00:20:10.807 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:10.807 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:20:10.807 "prchk_reftag": false, 00:20:10.807 "prchk_guard": false, 00:20:10.807 "hdgst": false, 00:20:10.807 "ddgst": false, 00:20:10.807 "dhchap_key": "key1", 00:20:10.807 "dhchap_ctrlr_key": "ckey2", 00:20:10.807 "method": "bdev_nvme_attach_controller", 00:20:10.807 "req_id": 1 00:20:10.807 } 00:20:10.807 Got JSON-RPC error response 00:20:10.807 response: 00:20:10.807 { 00:20:10.807 "code": -5, 00:20:10.807 "message": "Input/output error" 00:20:10.807 } 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@585 -- # [[ 1 == 0 ]] 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@649 -- # es=1 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # nvmfcleanup 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.807 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.065 rmmod nvme_tcp 00:20:11.065 rmmod nvme_fabrics 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # '[' -n 90792 ']' 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # killprocess 90792 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 90792 ']' 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 90792 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90792 00:20:11.065 killing process with pid 90792 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90792' 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 90792 00:20:11.065 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@970 -- # wait 90792 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # iptr 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # iptables-save 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # iptables-restore 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:20:11.323 21:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:20:11.323 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:20:11.323 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:11.323 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:11.323 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # remove_spdk_ns 00:20:11.323 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.323 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.323 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # return 0 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:11.581 21:01:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # echo 0 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@709 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@711 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # modules=(/sys/module/nvmet/holders/*) 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # modprobe -r nvmet_tcp nvmet 00:20:11.581 21:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:12.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:12.405 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:12.406 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:12.406 21:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pCR /tmp/spdk.key-null.w6K /tmp/spdk.key-sha256.AWx /tmp/spdk.key-sha384.trQ /tmp/spdk.key-sha512.fg0 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:12.406 21:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:12.970 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:12.970 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:12.970 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:12.970 00:20:12.970 real 0m35.549s 00:20:12.970 user 0m32.109s 00:20:12.970 sys 0m4.178s 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.970 ************************************ 00:20:12.970 END TEST nvmf_auth_host 00:20:12.970 ************************************ 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.970 ************************************ 00:20:12.970 START TEST nvmf_digest 00:20:12.970 ************************************ 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:12.970 * Looking for test storage... 
00:20:12.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.970 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:20:13.229 
21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # prepare_net_devs 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # local -g is_hw=no 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # remove_spdk_ns 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:20:13.229 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@452 -- # nvmf_veth_init 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:20:13.230 Cannot find device "nvmf_init_br" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:20:13.230 
Cannot find device "nvmf_init_br2" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:20:13.230 Cannot find device "nvmf_tgt_br" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.230 Cannot find device "nvmf_tgt_br2" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:20:13.230 Cannot find device "nvmf_init_br" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:20:13.230 Cannot find device "nvmf_init_br2" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:20:13.230 Cannot find device "nvmf_tgt_br" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:20:13.230 Cannot find device "nvmf_tgt_br2" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:20:13.230 Cannot find device "nvmf_br" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:20:13.230 Cannot find device "nvmf_init_if" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:20:13.230 Cannot find device "nvmf_init_if2" 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:13.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:13.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:13.230 21:01:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:13.230 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:20:13.489 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.489 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:20:13.489 00:20:13.489 --- 10.0.0.3 ping statistics --- 00:20:13.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.489 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:20:13.489 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:13.489 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.118 ms 00:20:13.489 00:20:13.489 --- 10.0.0.4 ping statistics --- 00:20:13.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.489 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:13.489 00:20:13.489 --- 10.0.0.1 ping statistics --- 00:20:13.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.489 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:13.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:20:13.489 00:20:13.489 --- 10.0.0.2 ping statistics --- 00:20:13.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.489 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@453 -- # return 0 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:13.489 ************************************ 00:20:13.489 START 
TEST nvmf_digest_clean 00:20:13.489 ************************************ 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@501 -- # nvmfpid=92415 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@502 -- # waitforlisten 92415 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 92415 ']' 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:13.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:13.489 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:13.489 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:13.489 [2024-08-11 21:01:24.251063] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:20:13.489 [2024-08-11 21:01:24.251176] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.748 [2024-08-11 21:01:24.387918] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.748 [2024-08-11 21:01:24.477797] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.748 [2024-08-11 21:01:24.477861] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:13.748 [2024-08-11 21:01:24.477889] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.748 [2024-08-11 21:01:24.477897] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.748 [2024-08-11 21:01:24.477904] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.748 [2024-08-11 21:01:24.477939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.748 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:13.748 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:20:13.748 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:20:13.748 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.748 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:14.006 [2024-08-11 21:01:24.615786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:14.006 null0 00:20:14.006 [2024-08-11 21:01:24.663146] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.006 [2024-08-11 21:01:24.687297] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92434 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
--wait-for-rpc 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92434 /var/tmp/bperf.sock 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 92434 ']' 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:14.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:14.006 21:01:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:14.006 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:14.006 [2024-08-11 21:01:24.749506] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:20:14.006 [2024-08-11 21:01:24.749661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92434 ] 00:20:14.265 [2024-08-11 21:01:24.890264] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.265 [2024-08-11 21:01:24.990306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.265 21:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:14.265 21:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:20:14.265 21:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:14.265 21:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:14.265 21:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:14.523 [2024-08-11 21:01:25.298435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:14.781 21:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:14.781 21:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:15.039 nvme0n1 00:20:15.039 21:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:15.039 21:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:15.039 Running I/O for 2 seconds... 
00:20:17.571 00:20:17.571 Latency(us) 00:20:17.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.571 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:17.571 nvme0n1 : 2.01 16319.10 63.75 0.00 0.00 7840.08 7149.38 17039.36 00:20:17.571 =================================================================================================================== 00:20:17.571 Total : 16319.10 63.75 0.00 0.00 7840.08 7149.38 17039.36 00:20:17.571 0 00:20:17.571 21:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:17.571 21:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:17.571 21:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:17.571 21:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:17.571 21:01:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:17.571 | select(.opcode=="crc32c") 00:20:17.571 | "\(.module_name) \(.executed)"' 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92434 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 92434 ']' 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 92434 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92434 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:17.571 killing process with pid 92434 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92434' 00:20:17.571 Received shutdown signal, test time was about 2.000000 seconds 00:20:17.571 00:20:17.571 Latency(us) 00:20:17.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.571 =================================================================================================================== 00:20:17.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 92434 00:20:17.571 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 
92434 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92487 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92487 /var/tmp/bperf.sock 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 92487 ']' 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:17.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:17.831 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:17.831 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:17.831 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:17.831 Zero copy mechanism will not be used. 00:20:17.831 [2024-08-11 21:01:28.396082] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:20:17.831 [2024-08-11 21:01:28.396189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92487 ] 00:20:17.831 [2024-08-11 21:01:28.529254] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.090 [2024-08-11 21:01:28.618965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.090 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:18.090 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:20:18.090 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:18.090 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:18.090 21:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:18.348 [2024-08-11 21:01:28.967224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:18.348 21:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:18.348 21:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:18.607 nvme0n1 00:20:18.607 21:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:18.607 21:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:18.866 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:18.866 Zero copy mechanism will not be used. 00:20:18.866 Running I/O for 2 seconds... 
00:20:20.839 00:20:20.839 Latency(us) 00:20:20.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.839 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:20.839 nvme0n1 : 2.00 8132.54 1016.57 0.00 0.00 1964.25 1765.00 6196.13 00:20:20.839 =================================================================================================================== 00:20:20.839 Total : 8132.54 1016.57 0.00 0.00 1964.25 1765.00 6196.13 00:20:20.839 0 00:20:20.839 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:20.839 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:20.839 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:20.839 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:20.839 | select(.opcode=="crc32c") 00:20:20.839 | "\(.module_name) \(.executed)"' 00:20:20.839 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92487 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 92487 ']' 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 92487 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92487 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:21.099 killing process with pid 92487 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92487' 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 92487 00:20:21.099 Received shutdown signal, test time was about 2.000000 seconds 00:20:21.099 00:20:21.099 Latency(us) 00:20:21.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.099 =================================================================================================================== 00:20:21.099 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.099 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 
92487 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92534 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92534 /var/tmp/bperf.sock 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 92534 ']' 00:20:21.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:21.358 21:01:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:21.358 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:21.358 [2024-08-11 21:01:32.034012] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:20:21.358 [2024-08-11 21:01:32.034107] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92534 ] 00:20:21.617 [2024-08-11 21:01:32.168256] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.617 [2024-08-11 21:01:32.261362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.617 21:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:21.617 21:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:20:21.617 21:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:21.617 21:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:21.617 21:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:21.875 [2024-08-11 21:01:32.565621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:21.875 21:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:21.875 21:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:22.443 nvme0n1 00:20:22.443 21:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:22.443 21:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:22.443 Running I/O for 2 seconds... 
00:20:24.348 00:20:24.348 Latency(us) 00:20:24.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.348 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:24.348 nvme0n1 : 2.00 17430.75 68.09 0.00 0.00 7336.87 2412.92 14775.39 00:20:24.348 =================================================================================================================== 00:20:24.348 Total : 17430.75 68.09 0.00 0.00 7336.87 2412.92 14775.39 00:20:24.348 0 00:20:24.348 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:24.607 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:24.607 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:24.607 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:24.607 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:24.607 | select(.opcode=="crc32c") 00:20:24.607 | "\(.module_name) \(.executed)"' 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92534 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 92534 ']' 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 92534 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92534 00:20:24.866 killing process with pid 92534 00:20:24.866 Received shutdown signal, test time was about 2.000000 seconds 00:20:24.866 00:20:24.866 Latency(us) 00:20:24.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.866 =================================================================================================================== 00:20:24.866 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92534' 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 92534 00:20:24.866 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 
92534 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92588 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92588 /var/tmp/bperf.sock 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 92588 ']' 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:25.125 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:25.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:25.126 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:25.126 21:01:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:25.126 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:25.126 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:25.126 Zero copy mechanism will not be used. 00:20:25.126 [2024-08-11 21:01:35.705537] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:20:25.126 [2024-08-11 21:01:35.705815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6
00:20:25.126 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92588 ] 00:20:25.126 [2024-08-11 21:01:35.844259] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.384 [2024-08-11 21:01:35.935832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.952 21:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:25.953 21:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:20:25.953 21:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:25.953 21:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:25.953 21:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:26.520 [2024-08-11 21:01:37.013939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:26.520 21:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.520 21:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.779 nvme0n1 00:20:26.779 21:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:26.779 21:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:26.779 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:26.779 Zero copy mechanism will not be used. 00:20:26.779 Running I/O for 2 seconds... 
00:20:29.312 00:20:29.312 Latency(us) 00:20:29.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.312 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:29.312 nvme0n1 : 2.00 6738.85 842.36 0.00 0.00 2369.09 1936.29 7983.48 00:20:29.312 =================================================================================================================== 00:20:29.312 Total : 6738.85 842.36 0.00 0.00 2369.09 1936.29 7983.48 00:20:29.312 0 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:29.312 | select(.opcode=="crc32c") 00:20:29.312 | "\(.module_name) \(.executed)"' 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92588 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 92588 ']' 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 92588 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92588 00:20:29.312 killing process with pid 92588 00:20:29.312 Received shutdown signal, test time was about 2.000000 seconds 00:20:29.312 00:20:29.312 Latency(us) 00:20:29.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.312 =================================================================================================================== 00:20:29.312 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92588' 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 92588 00:20:29.312 21:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 
92588 00:20:29.312 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 92415 00:20:29.312 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 92415 ']' 00:20:29.312 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 92415 00:20:29.312 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:20:29.312 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:29.312 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92415 00:20:29.313 killing process with pid 92415 00:20:29.313 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:29.313 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:29.313 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92415' 00:20:29.313 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 92415 00:20:29.313 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 92415 00:20:29.571 00:20:29.571 real 0m16.092s 00:20:29.571 user 0m31.382s 00:20:29.571 sys 0m4.615s 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:29.571 ************************************ 00:20:29.571 END TEST nvmf_digest_clean 00:20:29.571 ************************************ 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:29.571 ************************************ 00:20:29.571 START TEST nvmf_digest_error 00:20:29.571 ************************************ 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@501 -- # nvmfpid=92671 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@502 -- # waitforlisten 92671 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 92671 ']' 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:29.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:29.571 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:29.830 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:29.830 [2024-08-11 21:01:40.395351] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:20:29.830 [2024-08-11 21:01:40.395614] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.830 [2024-08-11 21:01:40.530006] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.089 [2024-08-11 21:01:40.614751] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.089 [2024-08-11 21:01:40.615092] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.089 [2024-08-11 21:01:40.615221] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.089 [2024-08-11 21:01:40.615275] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.089 [2024-08-11 21:01:40.615379] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:30.089 [2024-08-11 21:01:40.615462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:30.089 [2024-08-11 21:01:40.692127] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:30.089 [2024-08-11 21:01:40.757158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:30.089 null0 00:20:30.089 [2024-08-11 21:01:40.804058] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.089 [2024-08-11 21:01:40.828175] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:30.089 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92701 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92701 /var/tmp/bperf.sock 00:20:30.090 21:01:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 92701 ']' 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:30.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:30.090 21:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:30.349 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:30.349 [2024-08-11 21:01:40.890803] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:20:30.349 [2024-08-11 21:01:40.891053] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92701 ] 00:20:30.349 [2024-08-11 21:01:41.030859] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.349 [2024-08-11 21:01:41.109235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.607 [2024-08-11 21:01:41.161365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:30.607 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:30.607 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:20:30.607 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:30.607 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:30.865 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:30.865 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:30.865 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:30.865 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:30.865 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.866 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:31.124 nvme0n1 00:20:31.124 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:31.124 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:31.124 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:31.124 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:31.124 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:31.124 21:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:31.382 Running I/O for 2 seconds... 00:20:31.382 [2024-08-11 21:01:42.007779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.382 [2024-08-11 21:01:42.007847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.382 [2024-08-11 21:01:42.007879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.382 [2024-08-11 21:01:42.023312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.382 [2024-08-11 21:01:42.023351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.382 [2024-08-11 21:01:42.023380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.382 [2024-08-11 21:01:42.038187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.382 [2024-08-11 21:01:42.038507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.382 [2024-08-11 21:01:42.038543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.382 [2024-08-11 21:01:42.055487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.382 [2024-08-11 21:01:42.055526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.382 [2024-08-11 21:01:42.055555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.382 [2024-08-11 21:01:42.070469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.382 [2024-08-11 21:01:42.070508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.382 [2024-08-11 21:01:42.070537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.382 [2024-08-11 21:01:42.086157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.383 [2024-08-11 21:01:42.086197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.383 [2024-08-11 21:01:42.086227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.383 [2024-08-11 21:01:42.101291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.383 [2024-08-11 21:01:42.101329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.383 [2024-08-11 21:01:42.101358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.383 [2024-08-11 21:01:42.116358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.383 [2024-08-11 21:01:42.116396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.383 [2024-08-11 21:01:42.116424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.383 [2024-08-11 21:01:42.130975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.383 [2024-08-11 21:01:42.131014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.383 [2024-08-11 21:01:42.131043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.383 [2024-08-11 21:01:42.145647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.383 [2024-08-11 21:01:42.145862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.383 [2024-08-11 21:01:42.145896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.641 [2024-08-11 21:01:42.160582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.641 [2024-08-11 21:01:42.160630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.160659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.175388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.175427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.175456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.190207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 
[2024-08-11 21:01:42.190246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.190275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.204887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.204923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.204952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.219556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.219765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.219799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.234455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.234494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.234524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.249256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.249294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.249323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.263984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.264194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.264228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.278925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.279124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.279157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.293827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.294013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.294046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.308770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.308960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.308994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.323658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.323827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.323861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.338667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.338836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.338869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.353071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.353109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.353138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.367334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.367369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.367398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.381649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.381810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.381843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.396010] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.396046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.396075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.642 [2024-08-11 21:01:42.410230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.642 [2024-08-11 21:01:42.410266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.642 [2024-08-11 21:01:42.410293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.900 [2024-08-11 21:01:42.424366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.900 [2024-08-11 21:01:42.424403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.900 [2024-08-11 21:01:42.424431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.900 [2024-08-11 21:01:42.438529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.900 [2024-08-11 21:01:42.438565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.900 [2024-08-11 21:01:42.438593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.900 [2024-08-11 21:01:42.452741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.900 [2024-08-11 21:01:42.452777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.900 [2024-08-11 21:01:42.452805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.900 [2024-08-11 21:01:42.467155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.900 [2024-08-11 21:01:42.467190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.900 [2024-08-11 21:01:42.467218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.900 [2024-08-11 21:01:42.481484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.900 [2024-08-11 21:01:42.481523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.900 [2024-08-11 21:01:42.481551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:31.900 [2024-08-11 21:01:42.496229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.496409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.496427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.510978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.511033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.511061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.525822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.525859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.525886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.540168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.540204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.540232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.554304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.554340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.554368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.568427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.568462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.568489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.582582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.582626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.582654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.596726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.596761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.596788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.610911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.610945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.610972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.625027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.625062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.625089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.639158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.639193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.639220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.653281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.653316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.653344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:31.901 [2024-08-11 21:01:42.667599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:31.901 [2024-08-11 21:01:42.667633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:31.901 [2024-08-11 21:01:42.667660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.681703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.681737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.681764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.695944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.695979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.696006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.711487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.711523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.711551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.726186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.726221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.726248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.740714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.740749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.740777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.755628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.755662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.755689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.769835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.769870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.769897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.784098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.784132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.784159] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.798435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.798469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.798496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.812738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.812772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.812799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.826958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.826993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.827020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.841162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.841197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.841223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.855328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.855377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.855389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.869419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.869453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.869481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.883541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.883576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9429 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.883602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.897684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.897719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.897747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.911883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.911919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.911946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.162 [2024-08-11 21:01:42.932407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.162 [2024-08-11 21:01:42.932444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.162 [2024-08-11 21:01:42.932471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:42.946849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:42.946885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:42.946913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:42.961207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:42.961243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:42.961270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:42.975343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:42.975378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:42.975404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:42.989474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:42.989509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:118 nsid:1 lba:16635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:42.989536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:43.003662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:43.003696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:43.003723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:43.017987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:43.018022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:43.018050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:43.034046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:43.034104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:43.034117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:43.049055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:43.049092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:43.049120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:43.063904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:43.063957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:43.063971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:43.079298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:43.079335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:43.079363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:43.094583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:43.094632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:43.094661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.420 [2024-08-11 21:01:43.109699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.420 [2024-08-11 21:01:43.109736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.420 [2024-08-11 21:01:43.109764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.421 [2024-08-11 21:01:43.124726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.421 [2024-08-11 21:01:43.124761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.421 [2024-08-11 21:01:43.124789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.421 [2024-08-11 21:01:43.139325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.421 [2024-08-11 21:01:43.139361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.421 [2024-08-11 21:01:43.139390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.421 [2024-08-11 21:01:43.153997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.421 [2024-08-11 21:01:43.154030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.421 [2024-08-11 21:01:43.154057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.421 [2024-08-11 21:01:43.168516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.421 [2024-08-11 21:01:43.168552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.421 [2024-08-11 21:01:43.168579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.421 [2024-08-11 21:01:43.183052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.421 [2024-08-11 21:01:43.183087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.421 [2024-08-11 21:01:43.183115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.679 [2024-08-11 21:01:43.197558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 
00:20:32.679 [2024-08-11 21:01:43.197618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.679 [2024-08-11 21:01:43.197632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.679 [2024-08-11 21:01:43.212108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.679 [2024-08-11 21:01:43.212143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.679 [2024-08-11 21:01:43.212170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.679 [2024-08-11 21:01:43.226444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.679 [2024-08-11 21:01:43.226479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.679 [2024-08-11 21:01:43.226506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.679 [2024-08-11 21:01:43.240527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.679 [2024-08-11 21:01:43.240563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.679 [2024-08-11 21:01:43.240590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.679 [2024-08-11 21:01:43.254629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.679 [2024-08-11 21:01:43.254663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.679 [2024-08-11 21:01:43.254690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.679 [2024-08-11 21:01:43.268747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.679 [2024-08-11 21:01:43.268781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.679 [2024-08-11 21:01:43.268808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.679 [2024-08-11 21:01:43.282933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.679 [2024-08-11 21:01:43.282967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.679 [2024-08-11 21:01:43.282994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.679 [2024-08-11 21:01:43.297020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x6ffb10) 00:20:32.679 [2024-08-11 21:01:43.297070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.679 [2024-08-11 21:01:43.297081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.679 [2024-08-11 21:01:43.311140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.679 [2024-08-11 21:01:43.311174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.679 [2024-08-11 21:01:43.311200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.680 [2024-08-11 21:01:43.325288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.680 [2024-08-11 21:01:43.325322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.680 [2024-08-11 21:01:43.325349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.680 [2024-08-11 21:01:43.339815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.680 [2024-08-11 21:01:43.339849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.680 [2024-08-11 21:01:43.339876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.680 [2024-08-11 21:01:43.354020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.680 [2024-08-11 21:01:43.354055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.680 [2024-08-11 21:01:43.354091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.680 [2024-08-11 21:01:43.368207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.680 [2024-08-11 21:01:43.368241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.680 [2024-08-11 21:01:43.368269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.680 [2024-08-11 21:01:43.382300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.680 [2024-08-11 21:01:43.382335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.680 [2024-08-11 21:01:43.382363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.680 [2024-08-11 21:01:43.396392] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.680 [2024-08-11 21:01:43.396426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.680 [2024-08-11 21:01:43.396453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.680 [2024-08-11 21:01:43.410497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.680 [2024-08-11 21:01:43.410533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.680 [2024-08-11 21:01:43.410560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.680 [2024-08-11 21:01:43.424625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.680 [2024-08-11 21:01:43.424660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.680 [2024-08-11 21:01:43.424687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.680 [2024-08-11 21:01:43.438746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.680 [2024-08-11 21:01:43.438796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.680 [2024-08-11 21:01:43.438808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.680 [2024-08-11 21:01:43.452839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.680 [2024-08-11 21:01:43.452873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.680 [2024-08-11 21:01:43.452900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.466947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.466981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.467008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.481048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.481081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.481108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:32.939 [2024-08-11 21:01:43.495153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.495187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.495214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.509230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.509264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.509292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.523772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.523822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.523833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.538455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.538490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.538518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.553215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.553249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.553277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.567338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.567372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.567400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.581446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.581481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.581508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.595533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.595567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.595594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.609638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.609671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.609699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.623751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.623785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.623812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.637855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.637889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.637916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.651968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.652002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.652029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.666160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.666194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.666222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.680234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.680267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.680294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.694769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.694802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.694830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.939 [2024-08-11 21:01:43.709397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:32.939 [2024-08-11 21:01:43.709431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.939 [2024-08-11 21:01:43.709459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.198 [2024-08-11 21:01:43.723786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.198 [2024-08-11 21:01:43.723822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.198 [2024-08-11 21:01:43.723850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.198 [2024-08-11 21:01:43.737922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.198 [2024-08-11 21:01:43.737956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.198 [2024-08-11 21:01:43.737983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.198 [2024-08-11 21:01:43.752030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.198 [2024-08-11 21:01:43.752064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.198 [2024-08-11 21:01:43.752091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.198 [2024-08-11 21:01:43.766141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.198 [2024-08-11 21:01:43.766175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.198 [2024-08-11 21:01:43.766202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.198 [2024-08-11 21:01:43.780231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.198 [2024-08-11 21:01:43.780265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.198 [2024-08-11 21:01:43.780292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.198 [2024-08-11 21:01:43.794336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.198 [2024-08-11 21:01:43.794371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.198 [2024-08-11 21:01:43.794398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.198 [2024-08-11 21:01:43.808424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.198 [2024-08-11 21:01:43.808458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.198 [2024-08-11 21:01:43.808485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.822528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.822562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 21:01:43.822590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.836659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.836693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 21:01:43.836720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.856848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.856883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 21:01:43.856910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.871082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.871117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 21:01:43.871143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.885218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.885253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 
21:01:43.885280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.899463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.899497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 21:01:43.899524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.913654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.913687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 21:01:43.913714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.927809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.927844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 21:01:43.927872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.941977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.942011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 21:01:43.942039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.956086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.956121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 21:01:43.956149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.199 [2024-08-11 21:01:43.970185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.199 [2024-08-11 21:01:43.970219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.199 [2024-08-11 21:01:43.970247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.458 [2024-08-11 21:01:43.984339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ffb10) 00:20:33.458 [2024-08-11 21:01:43.984373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7304 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:20:33.458 [2024-08-11 21:01:43.984401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:33.458
00:20:33.458 Latency(us)
00:20:33.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:33.458 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:33.458 nvme0n1 : 2.01 17521.14 68.44 0.00 0.00 7299.99 6583.39 27763.43
00:20:33.458 ===================================================================================================================
00:20:33.458 Total : 17521.14 68.44 0.00 0.00 7299.99 6583.39 27763.43
00:20:33.458 0
00:20:33.458 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:33.458 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:33.458 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:33.458 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:33.458 | .driver_specific
00:20:33.458 | .nvme_error
00:20:33.458 | .status_code
00:20:33.458 | .command_transient_transport_error'
00:20:33.716 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 ))
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92701
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 92701 ']'
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 92701
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92701
00:20:33.717 killing process with pid 92701 Received shutdown signal, test time was about 2.000000 seconds
00:20:33.717
00:20:33.717 Latency(us)
00:20:33.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:33.717 ===================================================================================================================
00:20:33.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92701'
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 92701
00:20:33.717 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 92701
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
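The get_transient_errcount trace above comes down to one bdev_get_iostat RPC against the bperf socket, filtered with jq down to the command_transient_transport_error counter, followed by a check that the count is non-zero (137 in this run); the per-status-code counters are only populated because the controller was set up with --nvme-error-stat, as in the bdev_nvme_set_options call traced further down. A minimal stand-alone sketch of that query, assuming the SPDK checkout and socket paths shown in the trace; the errcount variable name is only illustrative:

  #!/usr/bin/env bash
  # Read the transient transport error count for bdev nvme0n1 over the bperf RPC socket.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The digest error case only passes if at least one READ completed with a
  # transient transport error, i.e. the injected data digest corruption was seen.
  (( errcount > 0 ))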
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92749
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92749 /var/tmp/bperf.sock
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 92749 ']'
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:33.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:20:33.975 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:33.975 Invalid opts->opts_size 0 too small, please set opts_size correctly
00:20:33.975 [2024-08-11 21:01:44.604794] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization...
00:20:33.975 [2024-08-11 21:01:44.604910] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92749 ]
00:20:33.975 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:33.975 Zero copy mechanism will not be used.
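run_bperf_err then starts a fresh bdevperf in idle mode (-z) on core mask 0x2 and waits for its RPC socket before configuring anything. A simplified sketch of that launch-and-wait step, using the paths and flags from the trace; the polling loop is only a stand-in for the waitforlisten helper from autotest_common.sh, and rpc_get_methods is used here purely as a liveness probe:

  #!/usr/bin/env bash
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  # Start bdevperf idle (-z): randread, 131072-byte I/O, queue depth 16, 2-second runs.
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Poll the UNIX domain socket until the bdevperf RPC server answers (up to ~100 tries).
  for ((i = 0; i < 100; i++)); do
      "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods &> /dev/null && break
      sleep 0.1
  done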
00:20:33.975 [2024-08-11 21:01:44.743694] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:34.234 [2024-08-11 21:01:44.817105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:34.234 [2024-08-11 21:01:44.867431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:34.234 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:20:34.234 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:20:34.234 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:34.234 21:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:34.493 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:34.493 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@557 -- # xtrace_disable
00:20:34.493 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:34.493 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]]
00:20:34.493 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:34.493 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:34.761 nvme0n1
00:20:34.761 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:34.761 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@557 -- # xtrace_disable
00:20:34.761 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:34.762 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]]
00:20:34.762 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:34.762 21:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:35.021 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:35.021 Zero copy mechanism will not be used.
00:20:35.021 Running I/O for 2 seconds...
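Stripped of the xtrace noise, the setup traced above is a short RPC sequence: keep per-status-code NVMe error counters, clear any stale crc32c injection, attach the target over TCP with data digest enabled, arm the crc32c corruption, and start the timed run. A condensed sketch with the same commands and arguments as the trace; the rpc() wrapper is only shorthand for the rpc.py invocation shown there:

  #!/usr/bin/env bash
  spdk=/home/vagrant/spdk_repo/spdk
  rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  # Record NVMe errors per status code; bdev retry count of -1 as used by the test.
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure no earlier crc32c error injection is still active.
  rpc accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c results (-t corrupt -i 32) so received data digests no longer match.
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Start the queued I/O; every READ then completes with a transient transport error.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests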
00:20:35.021 [2024-08-11 21:01:45.625878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.625925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.625939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.629562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.629605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.629619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.633271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.633303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.633314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.636929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.636959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.636970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.640520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.640550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.640561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.644196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.644226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.644237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.647782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.647811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.647822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.651480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.651510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.651521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.655216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.655247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.655258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.658908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.658938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.658949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.662552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.662581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.662602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.666219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.666249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.666260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.669820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.669850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.669860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.673473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.673502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.673513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.677163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.677194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.677205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.680822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.021 [2024-08-11 21:01:45.680851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.021 [2024-08-11 21:01:45.680862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.021 [2024-08-11 21:01:45.684463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.684492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.684503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.688140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.688169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.688180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.691818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.691848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.691859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.695431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.695461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.695473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.699133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.699163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:35.022 [2024-08-11 21:01:45.699175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.702744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.702774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.702784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.706325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.706355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.706365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.709982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.710010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.710022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.713619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.713648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.713658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.717240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.717269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.717280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.720891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.720920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.720931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.724513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.724543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.724554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.728141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.728170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.728181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.731812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.731842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.731853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.735435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.735465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.735476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.739034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.739063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.739074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.742661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.742691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.742701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.746290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.746319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.746331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.749954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.749983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.749994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.753582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.753620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.753631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.757202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.757230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.757241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.760807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.760836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.760847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.764463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.764492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.764503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.768151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.768182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.768193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.771778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.771807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.771818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.775421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 
00:20:35.022 [2024-08-11 21:01:45.775451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.775461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.022 [2024-08-11 21:01:45.779123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.022 [2024-08-11 21:01:45.779153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.022 [2024-08-11 21:01:45.779164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.023 [2024-08-11 21:01:45.782763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.023 [2024-08-11 21:01:45.782792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.023 [2024-08-11 21:01:45.782802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.023 [2024-08-11 21:01:45.786480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.023 [2024-08-11 21:01:45.786510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.023 [2024-08-11 21:01:45.786521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.023 [2024-08-11 21:01:45.790082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.023 [2024-08-11 21:01:45.790110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.023 [2024-08-11 21:01:45.790121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.023 [2024-08-11 21:01:45.793707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.023 [2024-08-11 21:01:45.793735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.023 [2024-08-11 21:01:45.793746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.023 [2024-08-11 21:01:45.797371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.023 [2024-08-11 21:01:45.797400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.023 [2024-08-11 21:01:45.797411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.801017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.801047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.283 [2024-08-11 21:01:45.801058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.804688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.804716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.283 [2024-08-11 21:01:45.804727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.808310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.808339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.283 [2024-08-11 21:01:45.808350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.812014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.812045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.283 [2024-08-11 21:01:45.812056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.815699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.815728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.283 [2024-08-11 21:01:45.815738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.819302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.819332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.283 [2024-08-11 21:01:45.819343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.823015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.823045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.283 [2024-08-11 21:01:45.823056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.826626] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.826655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.283 [2024-08-11 21:01:45.826666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.830254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.830283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.283 [2024-08-11 21:01:45.830294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.833857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.833885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.283 [2024-08-11 21:01:45.833896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.283 [2024-08-11 21:01:45.837476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.283 [2024-08-11 21:01:45.837505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.837516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.841113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.841142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.841153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.844705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.844733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.844744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.848322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.848352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.848363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:35.284 [2024-08-11 21:01:45.851923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.851953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.851964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.855545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.855574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.855585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.859195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.859225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.859236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.862855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.862884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.862895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.866452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.866481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.866492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.870105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.870134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.870145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.873674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.873702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.873712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.877326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.877356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.877367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.881034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.881064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.881075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.884624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.884652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.884663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.888249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.888279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.888290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.891835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.891864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.891875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.895474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.895503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.895514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.899320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.899351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.899361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.902966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.902996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.903006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.906566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.906606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.906618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.910211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.910240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.910251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.913879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.913908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.913918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.917525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.917554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.917565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.921124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.921152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.921163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.924773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.924803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:35.284 [2024-08-11 21:01:45.924814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.928453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.928483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.284 [2024-08-11 21:01:45.928493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.284 [2024-08-11 21:01:45.932142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.284 [2024-08-11 21:01:45.932172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.932183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.935776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.935805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.935816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.939467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.939496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.939507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.943131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.943159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.943170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.946863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.946892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.946903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.950481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.950510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.950520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.954098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.954127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.954138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.957804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.957832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.957843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.961498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.961527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.961537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.965207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.965236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.965246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.968835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.968865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.968875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.972564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.972603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.972615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.976276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.976305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.976317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.980011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.980041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.980052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.983738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.983767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.983778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.987506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.987536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.987546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.991224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.991253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.991264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.994966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.994995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.995006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:45.998664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:45.998693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:45.998703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:46.002375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 
00:20:35.285 [2024-08-11 21:01:46.002405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:46.002415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:46.006019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:46.006048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:46.006058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:46.009649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:46.009677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:46.009688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:46.013300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:46.013331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:46.013343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:46.017015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:46.017045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:46.017056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:46.020810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:46.020840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:46.020851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:46.024511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:46.024540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:46.024551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:46.028252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:46.028282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:46.028292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:46.032032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.285 [2024-08-11 21:01:46.032061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.285 [2024-08-11 21:01:46.032072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.285 [2024-08-11 21:01:46.035726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.286 [2024-08-11 21:01:46.035755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.286 [2024-08-11 21:01:46.035765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.286 [2024-08-11 21:01:46.039354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.286 [2024-08-11 21:01:46.039383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.286 [2024-08-11 21:01:46.039394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.286 [2024-08-11 21:01:46.043012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.286 [2024-08-11 21:01:46.043041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.286 [2024-08-11 21:01:46.043052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.286 [2024-08-11 21:01:46.046718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.286 [2024-08-11 21:01:46.046746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.286 [2024-08-11 21:01:46.046758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.286 [2024-08-11 21:01:46.050381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.286 [2024-08-11 21:01:46.050410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.286 [2024-08-11 21:01:46.050421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.286 [2024-08-11 21:01:46.054085] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.286 [2024-08-11 21:01:46.054113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.286 [2024-08-11 21:01:46.054124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.286 [2024-08-11 21:01:46.057788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.286 [2024-08-11 21:01:46.057816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.286 [2024-08-11 21:01:46.057827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.061476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.061506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.061516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.065239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.065268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.065279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.068930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.068959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.068970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.072509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.072538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.072549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.076216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.076245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.076256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:35.547 [2024-08-11 21:01:46.079840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.079869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.079880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.083465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.083495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.083505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.087088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.087118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.087129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.090707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.090736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.090746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.094339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.094368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.094379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.098034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.098064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.098082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.101687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.101716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.101727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.105296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.105324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.105335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.108949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.108977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.108988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.112586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.112624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.112635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.116339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.116371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.116381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.119905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.119934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.119945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.123483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.123513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.123523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.127128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.127157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.127168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.130774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.130804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.130814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.134430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.134459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.134470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.138098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.138127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.138138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.141687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.141715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.141727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.145321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.145351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.145361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.148985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.149015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.547 [2024-08-11 21:01:46.149026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.547 [2024-08-11 21:01:46.152526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.547 [2024-08-11 21:01:46.152555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:35.548 [2024-08-11 21:01:46.152566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.156210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.156240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.156250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.159846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.159876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.159886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.163459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.163489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.163500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.167113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.167142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.167153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.170758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.170787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.170798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.174383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.174413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.174424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.178062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.178099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.178110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.181717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.181745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.181756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.185318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.185347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.185358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.188931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.188961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.188973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.192619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.192647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.192658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.196229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.196259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.196269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.199891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.199920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.199931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.203535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.203564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.203575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.207095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.207124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.207135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.210798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.210827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.210838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.214443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.214472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.214483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.218117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.218160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.218172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.221747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.221776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.221787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.225527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.225557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.225568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.229342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 
00:20:35.548 [2024-08-11 21:01:46.229373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.229383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.233018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.233048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.233058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.236705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.236733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.236744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.240311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.240340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.240351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.243972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.244002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.244013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.247614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.247642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.247653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.251301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.548 [2024-08-11 21:01:46.251332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.548 [2024-08-11 21:01:46.251344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.548 [2024-08-11 21:01:46.255054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.255083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.255093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.258772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.258802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.258813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.262503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.262533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.262544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.266145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.266174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.266185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.269784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.269813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.269823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.273426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.273455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.273465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.277160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.277189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.277201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.280817] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.280846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.280857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.284439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.284468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.284479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.288076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.288106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.288116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.291786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.291814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.291825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.295432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.295462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.295473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.299038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.299068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.299078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.302646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.302674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.302685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:35.549 [2024-08-11 21:01:46.306245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.306274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.306285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.309883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.309912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.309924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.313533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.313562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.313572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.317181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.317210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.317221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.549 [2024-08-11 21:01:46.320810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.549 [2024-08-11 21:01:46.320840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.549 [2024-08-11 21:01:46.320851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.324428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.324458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.810 [2024-08-11 21:01:46.324468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.328125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.328155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.810 [2024-08-11 21:01:46.328166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.331775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.331804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.810 [2024-08-11 21:01:46.331815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.335440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.335470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.810 [2024-08-11 21:01:46.335481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.339084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.339113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.810 [2024-08-11 21:01:46.339124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.342766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.342794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.810 [2024-08-11 21:01:46.342805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.346428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.346458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.810 [2024-08-11 21:01:46.346469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.350103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.350133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.810 [2024-08-11 21:01:46.350143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.353682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.353710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.810 [2024-08-11 21:01:46.353721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.357242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.357271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.810 [2024-08-11 21:01:46.357282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.810 [2024-08-11 21:01:46.360862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.810 [2024-08-11 21:01:46.360891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.360901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.364478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.364507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.364517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.368239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.368267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.368278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.371853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.371882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.371893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.375470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.375499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.375509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.379106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.379136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:35.811 [2024-08-11 21:01:46.379147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.382724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.382752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.382763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.386366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.386396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.386406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.390016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.390045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.390056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.393640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.393668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.393679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.397270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.397299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.397310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.400811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.400841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.400852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.404457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.404486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.404497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.408121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.408150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.408161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.411775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.411803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.411814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.415511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.415540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.415551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.419149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.419178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.419189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.422750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.422778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.422789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.426413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.426445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.426455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.430090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.430120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.430131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.433730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.433757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.433768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.437333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.437362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.437373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.440982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.441011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.441022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.444648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.444677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.444688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.448317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.448346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.448357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.451967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.451996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.452006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.455578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 
[2024-08-11 21:01:46.455617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.455628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.459206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.459234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.811 [2024-08-11 21:01:46.459245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.811 [2024-08-11 21:01:46.462839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.811 [2024-08-11 21:01:46.462868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.462879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.466419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.466448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.466458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.470063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.470099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.470110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.473702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.473729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.473739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.477379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.477407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.477418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.481050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.481079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.481090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.484673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.484700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.484711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.488228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.488257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.488268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.491825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.491853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.491864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.495469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.495498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.495509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.499188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.499217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.499228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.502793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.502822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.502833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.506436] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.506465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.506476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.510039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.510068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.510087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.513736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.513764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.513774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.517356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.517384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.517395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.520995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.521023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.521033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.524654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.524683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.524694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.528340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.528369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.528380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:35.812 [2024-08-11 21:01:46.532268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.532301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.532311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.536136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.536166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.536177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.539845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.539875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.539886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.543495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.543524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.543535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.547148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.547177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.547188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.550818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.550847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.550857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.554472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.554501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.554511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.558062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.558099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.558110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.561722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.812 [2024-08-11 21:01:46.561750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.812 [2024-08-11 21:01:46.561761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.812 [2024-08-11 21:01:46.565363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.813 [2024-08-11 21:01:46.565392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.813 [2024-08-11 21:01:46.565402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.813 [2024-08-11 21:01:46.569088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.813 [2024-08-11 21:01:46.569116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.813 [2024-08-11 21:01:46.569127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:35.813 [2024-08-11 21:01:46.572662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.813 [2024-08-11 21:01:46.572689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.813 [2024-08-11 21:01:46.572700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.813 [2024-08-11 21:01:46.576297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.813 [2024-08-11 21:01:46.576325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.813 [2024-08-11 21:01:46.576336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:35.813 [2024-08-11 21:01:46.579967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.813 [2024-08-11 21:01:46.579996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.813 [2024-08-11 21:01:46.580007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:35.813 [2024-08-11 21:01:46.583654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:35.813 [2024-08-11 21:01:46.583683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.813 [2024-08-11 21:01:46.583693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.073 [2024-08-11 21:01:46.587283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.073 [2024-08-11 21:01:46.587313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.073 [2024-08-11 21:01:46.587323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.073 [2024-08-11 21:01:46.590890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.073 [2024-08-11 21:01:46.590920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.073 [2024-08-11 21:01:46.590931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.594762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.594793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.594805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.599017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.599049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.599061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.603345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.603377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.603389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.607540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.607570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:36.074 [2024-08-11 21:01:46.607582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.611651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.611681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.611692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.615866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.615897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.615908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.620050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.620111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.620123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.624196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.624226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.624238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.628259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.628289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.628301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.632361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.632393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.632405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.636481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.636514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.636525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.640386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.640415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.640426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.644248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.644278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.644288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.648254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.648285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.648296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.652036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.652066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.652077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.655917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.655952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.655964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.659866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.659899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.659911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.663755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.663788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.663801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.667576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.667622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.667634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.671312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.671346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.671359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.675003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.675037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.675048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.678757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.678790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.678801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.682484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.683955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.683972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.687986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.688022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.688034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.691765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 
00:20:36.074 [2024-08-11 21:01:46.691799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.691810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.695482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.695656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.695672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.699376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.699533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.074 [2024-08-11 21:01:46.699549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.074 [2024-08-11 21:01:46.703251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.074 [2024-08-11 21:01:46.703408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.703424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.707187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.707345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.707361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.711178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.711213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.711225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.714934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.714967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.714980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.718758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.718791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.718803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.722530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.722705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.722721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.726502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.726674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.726690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.730425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.730650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.730666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.734471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.734659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.734676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.738585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.738772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.738788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.742642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.742676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.742689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.746471] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.746643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.746674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.750448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.750634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.750651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.754463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.754652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.754668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.758506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.758695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.758712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.762565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.762737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.762754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.766551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.766725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.766740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.770496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.770666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.770682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:36.075 [2024-08-11 21:01:46.774537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.774703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.774719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.778538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.778709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.778725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.782498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.782677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.782693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.786542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.786726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.786742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.790557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.790720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.790737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.794485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.794652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.794668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.798423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.798578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.798608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.802230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.802383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.802398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.806105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.806140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.806152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.809727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.075 [2024-08-11 21:01:46.809760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.075 [2024-08-11 21:01:46.809772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.075 [2024-08-11 21:01:46.813336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.076 [2024-08-11 21:01:46.813491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.076 [2024-08-11 21:01:46.813506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.076 [2024-08-11 21:01:46.817293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.076 [2024-08-11 21:01:46.817443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.076 [2024-08-11 21:01:46.817458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.076 [2024-08-11 21:01:46.821518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.076 [2024-08-11 21:01:46.821554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.076 [2024-08-11 21:01:46.821566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.076 [2024-08-11 21:01:46.825290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.076 [2024-08-11 21:01:46.825324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.076 [2024-08-11 21:01:46.825336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.076 [2024-08-11 21:01:46.829020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.076 [2024-08-11 21:01:46.829053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.076 [2024-08-11 21:01:46.829064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.076 [2024-08-11 21:01:46.832683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.076 [2024-08-11 21:01:46.832716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.076 [2024-08-11 21:01:46.832728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.076 [2024-08-11 21:01:46.836361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.076 [2024-08-11 21:01:46.836515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.076 [2024-08-11 21:01:46.836530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.076 [2024-08-11 21:01:46.840178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.076 [2024-08-11 21:01:46.840332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.076 [2024-08-11 21:01:46.840347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.076 [2024-08-11 21:01:46.844102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.076 [2024-08-11 21:01:46.844139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.076 [2024-08-11 21:01:46.844151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.076 [2024-08-11 21:01:46.847749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.076 [2024-08-11 21:01:46.847782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.076 [2024-08-11 21:01:46.847795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.851461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.851624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:36.336 [2024-08-11 21:01:46.851640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.855282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.855437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.855453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.859129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.859284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.859299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.863023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.863057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.863069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.866663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.866696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.866707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.870363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.870517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.870533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.874245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.874394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.874409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.878217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.878392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.878408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.882176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.882348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.882364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.886150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.886186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.886198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.889930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.889963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.889975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.893665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.336 [2024-08-11 21:01:46.893698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.336 [2024-08-11 21:01:46.893709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.336 [2024-08-11 21:01:46.897350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.897506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.897523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.901246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.901400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.901415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.905113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.905269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.905284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.908926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.908961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.908973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.912683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.912715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.912727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.916450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.916619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.916636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.920430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.920585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.920616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.924334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.924489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.924505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.928167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.928320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.928336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.932180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 
00:20:36.337 [2024-08-11 21:01:46.932215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.932227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.935943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.935977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.935988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.939714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.939747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.939758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.943352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.943506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.943523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.947244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.947399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.947414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.951125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.951278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.951293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.954923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.954959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.954971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.958658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.958691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.958703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.962308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.962462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.962479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.966098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.966251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.966266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.969929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.969962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.969975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.973542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.973729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.973746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.977380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.977548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.977563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.981309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.981338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.981350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.985010] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.985181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.985196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.989200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.989235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.989247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.992997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.993030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.993043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:46.996812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:46.996844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:46.996856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.000705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.000738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.000750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.004369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.004549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.004565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.008271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.008443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.008460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:36.337 [2024-08-11 21:01:47.012106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.012279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.012295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.016151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.016187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.016199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.019963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.019997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.020009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.023789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.023823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.023835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.027608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.027640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.027651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.031401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.031581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.031614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.035406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.035582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.035614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.039448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.039637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.039654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.043376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.043549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.043567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.047330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.047504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.047520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.051376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.051548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.051563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.055342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.055519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.055537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.059334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.059522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.059539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.063644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.063695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.063706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.067405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.067578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.067609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.071278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.071451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.071466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.075182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.075353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.075369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.079133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.079168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.079180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.082861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.082894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.082906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.086455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.086642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.086659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.090354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.090527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:36.337 [2024-08-11 21:01:47.090543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.094325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.094512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.094528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.098282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.337 [2024-08-11 21:01:47.098454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.337 [2024-08-11 21:01:47.098471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.337 [2024-08-11 21:01:47.102213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.338 [2024-08-11 21:01:47.102381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.338 [2024-08-11 21:01:47.102397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.338 [2024-08-11 21:01:47.106247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.338 [2024-08-11 21:01:47.106282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.338 [2024-08-11 21:01:47.106295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.338 [2024-08-11 21:01:47.109969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.338 [2024-08-11 21:01:47.110002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.338 [2024-08-11 21:01:47.110014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.113583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.113626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.113638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.117266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.117440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.117457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.121212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.121383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.121399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.125126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.125161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.125173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.128762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.128794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.128806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.132483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.132667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.132684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.136384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.136540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.136555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.140226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.140381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.140396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.144033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.144067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.144079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.147708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.147741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.147753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.151297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.151453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.151469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.155044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.155198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.155213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.158868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.158903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.158915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.162502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.162676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.162692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.166348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.166502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.166517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.170215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 
00:20:36.598 [2024-08-11 21:01:47.170354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.170370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.174052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.174215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.174231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.177904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.177938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.177950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.181617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.181649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.181661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.185236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.185389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.185405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.189033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.189069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.189081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.192692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.192724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.192736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.196273] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.196427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.196442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.200076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.200229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.200245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.203858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.203893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.203905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.207417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.207572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.207587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.598 [2024-08-11 21:01:47.211432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.598 [2024-08-11 21:01:47.211610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.598 [2024-08-11 21:01:47.211628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.215296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.215467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.215484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.219353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.219522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.219538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:20:36.599 [2024-08-11 21:01:47.223347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.223517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.223535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.227241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.227412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.227428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.231125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.231297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.231313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.235029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.235064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.235076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.238796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.238829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.238841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.242387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.242562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.242577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.246146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.246306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.246322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.250009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.250044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.250056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.253888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.253922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.253934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.257554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.257751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.257769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.261459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.261623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.261639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.265270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.265423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.265439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.269087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.269239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.269254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.272863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.272897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.272909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.276449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.276614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.276631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.280266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.280418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.280434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.284020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.284173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.284189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.287844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.287878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.287890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.291439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.291608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.291627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.295249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.295401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.295416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.299016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.299050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:36.599 [2024-08-11 21:01:47.299063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.302615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.302648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.302660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.306168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.306324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.306340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.310065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.310124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.310137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.313757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.313790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.599 [2024-08-11 21:01:47.313802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.599 [2024-08-11 21:01:47.317458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.599 [2024-08-11 21:01:47.317622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.317638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.321404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.321571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.321587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.325235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.325386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.325402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.329093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.329244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.329260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.332916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.332950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.332962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.336534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.336701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.336716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.340371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.340524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.340540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.344153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.344303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.344319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.347929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.347962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.347974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.351547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.351720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.351736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.355311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.355466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.355481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.359104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.359257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.359272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.362854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.362888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.362900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.366631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.366800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.366816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.600 [2024-08-11 21:01:47.370514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.600 [2024-08-11 21:01:47.370677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.600 [2024-08-11 21:01:47.370693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.860 [2024-08-11 21:01:47.374247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.860 [2024-08-11 21:01:47.374399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.374415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.378003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 
00:20:36.861 [2024-08-11 21:01:47.378036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.378047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.381782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.381815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.381827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.385457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.385620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.385636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.389231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.389382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.389397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.393080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.393115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.393126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.396803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.396836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.396848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.400500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.400687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.400703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.404563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.404743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.404759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.408448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.408632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.408649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.412360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.412531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.412547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.416282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.416453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.416469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.420110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.420280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.420296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.423933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.423968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.423980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.427641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.427674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.427685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.431320] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.431492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.431509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.435216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.435387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.435404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.439151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.439323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.439338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.443170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.443205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.443217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.446962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.446995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.447006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.450510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.450694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.450710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.454366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.454555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.454570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.458276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.458468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.458483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.462206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.462378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.462395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.466047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.466118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.466132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.469879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.469912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.469924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.473532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.473716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.473732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.477411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.861 [2024-08-11 21:01:47.477581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.861 [2024-08-11 21:01:47.477612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.861 [2024-08-11 21:01:47.481289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.481458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.481475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.485227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.485395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.485411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.489102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.489271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.489287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.493028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.493061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.493074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.496704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.496737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.496749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.500347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.500521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.500537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.504228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.504400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.504416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.508084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.508118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.508131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.511803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.511837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.511849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.515392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.515565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.515580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.519262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.519432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.519447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.523073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.523108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.523120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.526648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.526681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.526693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.530334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.530507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.530523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.534443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.534623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:36.862 [2024-08-11 21:01:47.534639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.538381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.538570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.538585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.542351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.542524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.542539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.546225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.546400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.546431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.550150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.550293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.550309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.554032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.554067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.554087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.557755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.557788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.557800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.561472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.561654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.561670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.565293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.565458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.565474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.569149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.569319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.569335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.573088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.573123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.573135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.576772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.576805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.576817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.580499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.580685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.580701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.862 [2024-08-11 21:01:47.584322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.862 [2024-08-11 21:01:47.584493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.862 [2024-08-11 21:01:47.584508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.863 [2024-08-11 21:01:47.588281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.863 [2024-08-11 21:01:47.588452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.863 [2024-08-11 21:01:47.588468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.863 [2024-08-11 21:01:47.592202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.863 [2024-08-11 21:01:47.592373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.863 [2024-08-11 21:01:47.592389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.863 [2024-08-11 21:01:47.596083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.863 [2024-08-11 21:01:47.596117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.863 [2024-08-11 21:01:47.596129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.863 [2024-08-11 21:01:47.599791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.863 [2024-08-11 21:01:47.599824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.863 [2024-08-11 21:01:47.599836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.863 [2024-08-11 21:01:47.603388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.863 [2024-08-11 21:01:47.603561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.863 [2024-08-11 21:01:47.603577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.863 [2024-08-11 21:01:47.607274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.863 [2024-08-11 21:01:47.607440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.863 [2024-08-11 21:01:47.607456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:36.863 [2024-08-11 21:01:47.611152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 00:20:36.863 [2024-08-11 21:01:47.611324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.863 [2024-08-11 21:01:47.611340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:36.863 [2024-08-11 21:01:47.615087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x623400) 
00:20:36.863 [2024-08-11 21:01:47.615122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.863 [2024-08-11 21:01:47.615135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:36.863 00:20:36.863 Latency(us) 00:20:36.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.863 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:36.863 nvme0n1 : 2.00 8229.82 1028.73 0.00 0.00 1941.32 1683.08 9234.62 00:20:36.863 =================================================================================================================== 00:20:36.863 Total : 8229.82 1028.73 0.00 0.00 1941.32 1683.08 9234.62 00:20:36.863 0 00:20:37.122 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:37.122 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:37.122 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:37.122 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:37.122 | .driver_specific 00:20:37.122 | .nvme_error 00:20:37.122 | .status_code 00:20:37.122 | .command_transient_transport_error' 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 531 > 0 )) 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92749 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 92749 ']' 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 92749 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92749 00:20:37.381 killing process with pid 92749 00:20:37.381 Received shutdown signal, test time was about 2.000000 seconds 00:20:37.381 00:20:37.381 Latency(us) 00:20:37.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.381 =================================================================================================================== 00:20:37.381 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92749' 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 92749 00:20:37.381 21:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 92749 00:20:37.639 21:01:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:37.639 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:37.639 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:37.639 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:37.639 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:37.639 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92802 00:20:37.639 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92802 /var/tmp/bperf.sock 00:20:37.639 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:37.639 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 92802 ']' 00:20:37.640 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:37.640 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:37.640 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:37.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:37.640 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:37.640 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:37.640 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:37.640 [2024-08-11 21:01:48.233965] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
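For reference, the get_transient_errcount check traced just before the previous bperf process (pid 92749) was killed reduces to a small helper around bdev_get_iostat. The sketch below is an illustrative reconstruction from the trace, not the host/digest.sh source: the function body and variable names are assumptions, while the rpc.py path, socket, and jq filter are copied verbatim from the log lines above.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Illustrative reconstruction (not the host/digest.sh source): read the
# per-controller error counters kept because of --nvme-error-stat and pull
# out the "command transient transport error" count for the given bdev.
get_transient_errcount() {
    "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# The run above passes because the injected digest errors produced a
# non-zero counter (531 in this log).
(( $(get_transient_errcount nvme0n1) > 0 ))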
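The new run above starts a second bdevperf instance on /var/tmp/bperf.sock and then sits in waitforlisten until that socket accepts RPCs. Below is a minimal sketch of that wait step, assuming the same rpc.py path and socket; the helper name and polling loop are hypothetical and are not the autotest_common.sh implementation.

# Hypothetical helper (not autotest_common.sh): poll the bperf RPC socket
# until bdevperf (pid $1) answers rpc_get_methods, or give up after ~10s.
wait_for_bperf_rpc() {
    local pid=$1 sock=${2:-/var/tmp/bperf.sock}
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        # If bdevperf already died, the socket will never appear.
        kill -0 "$pid" 2>/dev/null || return 1
        "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1
}

In this run the helper would be called as wait_for_bperf_rpc 92802 before any bperf_rpc calls are issued.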
00:20:37.640 [2024-08-11 21:01:48.234377] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92802 ] 00:20:37.640 [2024-08-11 21:01:48.371328] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.898 [2024-08-11 21:01:48.440364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.898 [2024-08-11 21:01:48.490922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:37.898 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:37.898 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:20:37.898 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:37.898 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:38.157 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:38.157 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:38.157 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.157 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:38.157 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:38.157 21:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:38.416 nvme0n1 00:20:38.416 21:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:38.416 21:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:38.416 21:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.416 21:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:38.416 21:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:38.417 21:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:38.676 Running I/O for 2 seconds... 
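Condensed, the setup traced above is a short RPC sequence: configure the host bdev layer to keep NVMe error statistics and retry failed I/O indefinitely, keep crc32c error injection disabled while attaching, connect the TCP controller with data digest (--ddgst) enabled, then corrupt 256 crc32c operations and drive the random-write workload. The commands in the sketch are taken from the trace; only the shell variables are added, and the accel_error_inject_error calls are shown on rpc.py's default socket as a stand-in for the rpc_cmd target the suite uses.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Host (bdevperf) side: per-controller error counters, unlimited retries.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Injection off while the controller is attached (rpc_cmd in the trace).
"$rpc" accel_error_inject_error -o crc32c -t disable

# Attach over TCP with data digest enabled so corrupted digests are caught.
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 256 crc32c operations, then run the 2-second workload.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests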
00:20:38.676 [2024-08-11 21:01:49.232625] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fef90 00:20:38.676 [2024-08-11 21:01:49.234888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.234928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:38.676 [2024-08-11 21:01:49.246020] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fe720 00:20:38.676 [2024-08-11 21:01:49.248071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.248103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:38.676 [2024-08-11 21:01:49.259258] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fdeb0 00:20:38.676 [2024-08-11 21:01:49.261293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.262760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:38.676 [2024-08-11 21:01:49.273973] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fd640 00:20:38.676 [2024-08-11 21:01:49.275998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.276031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:38.676 [2024-08-11 21:01:49.287169] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fcdd0 00:20:38.676 [2024-08-11 21:01:49.289168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.289197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:38.676 [2024-08-11 21:01:49.300358] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fc560 00:20:38.676 [2024-08-11 21:01:49.302362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.302527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:38.676 [2024-08-11 21:01:49.313745] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fbcf0 00:20:38.676 [2024-08-11 21:01:49.315752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.315783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:20:38.676 [2024-08-11 21:01:49.326982] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fb480 00:20:38.676 [2024-08-11 21:01:49.328922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.329081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:38.676 [2024-08-11 21:01:49.340466] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fac10 00:20:38.676 [2024-08-11 21:01:49.342603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.342634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:38.676 [2024-08-11 21:01:49.353850] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fa3a0 00:20:38.676 [2024-08-11 21:01:49.355779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.355933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:38.676 [2024-08-11 21:01:49.367340] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f9b30 00:20:38.676 [2024-08-11 21:01:49.369258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.676 [2024-08-11 21:01:49.369290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:38.677 [2024-08-11 21:01:49.380523] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f92c0 00:20:38.677 [2024-08-11 21:01:49.382502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.677 [2024-08-11 21:01:49.382534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:38.677 [2024-08-11 21:01:49.393819] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f8a50 00:20:38.677 [2024-08-11 21:01:49.395724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.677 [2024-08-11 21:01:49.395754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:38.677 [2024-08-11 21:01:49.407053] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f81e0 00:20:38.677 [2024-08-11 21:01:49.409170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.677 [2024-08-11 21:01:49.409200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:38.677 [2024-08-11 21:01:49.420529] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f7970 00:20:38.677 [2024-08-11 21:01:49.422466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.677 [2024-08-11 21:01:49.422497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:38.677 [2024-08-11 21:01:49.433807] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f7100 00:20:38.677 [2024-08-11 21:01:49.435630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.677 [2024-08-11 21:01:49.435660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:38.677 [2024-08-11 21:01:49.446985] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f6890 00:20:38.677 [2024-08-11 21:01:49.448788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.677 [2024-08-11 21:01:49.448955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:38.936 [2024-08-11 21:01:49.460355] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f6020 00:20:38.936 [2024-08-11 21:01:49.462254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.936 [2024-08-11 21:01:49.462286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:38.936 [2024-08-11 21:01:49.473720] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f57b0 00:20:38.936 [2024-08-11 21:01:49.475491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.936 [2024-08-11 21:01:49.475522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:38.936 [2024-08-11 21:01:49.487040] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f4f40 00:20:38.936 [2024-08-11 21:01:49.488794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.936 [2024-08-11 21:01:49.488823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:38.936 [2024-08-11 21:01:49.500212] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f46d0 00:20:38.936 [2024-08-11 21:01:49.502143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.936 [2024-08-11 21:01:49.502175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:38.936 [2024-08-11 21:01:49.513617] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f3e60 00:20:38.936 [2024-08-11 21:01:49.515344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.515502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.526992] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f35f0 00:20:38.937 [2024-08-11 21:01:49.528961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.528992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.540440] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f2d80 00:20:38.937 [2024-08-11 21:01:49.542142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.542298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.553823] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f2510 00:20:38.937 [2024-08-11 21:01:49.555689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.555721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.567193] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f1ca0 00:20:38.937 [2024-08-11 21:01:49.568863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.569016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.580541] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f1430 00:20:38.937 [2024-08-11 21:01:49.582296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.582328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.593843] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f0bc0 00:20:38.937 [2024-08-11 21:01:49.595466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.595498] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.607091] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f0350 00:20:38.937 [2024-08-11 21:01:49.608702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.608734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.620232] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190efae0 00:20:38.937 [2024-08-11 21:01:49.622008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.622039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.633600] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ef270 00:20:38.937 [2024-08-11 21:01:49.635175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.635334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.646943] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190eea00 00:20:38.937 [2024-08-11 21:01:49.648682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.648713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.660334] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ee190 00:20:38.937 [2024-08-11 21:01:49.661896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.662051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.673888] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ed920 00:20:38.937 [2024-08-11 21:01:49.675719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.675750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.688104] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ed0b0 00:20:38.937 [2024-08-11 21:01:49.689680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.689710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:38.937 [2024-08-11 21:01:49.701823] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ec840 00:20:38.937 [2024-08-11 21:01:49.703323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.937 [2024-08-11 21:01:49.703355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.715105] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ebfd0 00:20:39.197 [2024-08-11 21:01:49.716577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.716618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.728617] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190eb760 00:20:39.197 [2024-08-11 21:01:49.730084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.730115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.741954] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190eaef0 00:20:39.197 [2024-08-11 21:01:49.743433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.743604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.755403] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ea680 00:20:39.197 [2024-08-11 21:01:49.757018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.757051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.768811] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e9e10 00:20:39.197 [2024-08-11 21:01:49.770248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.770279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.782011] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e95a0 00:20:39.197 [2024-08-11 21:01:49.783585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 
21:01:49.783625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.795416] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e8d30 00:20:39.197 [2024-08-11 21:01:49.796812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.796842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.808615] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e84c0 00:20:39.197 [2024-08-11 21:01:49.809989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.810152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.822087] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e7c50 00:20:39.197 [2024-08-11 21:01:49.823435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.823468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.835504] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e73e0 00:20:39.197 [2024-08-11 21:01:49.836980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.837011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.848833] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e6b70 00:20:39.197 [2024-08-11 21:01:49.850158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.850191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.862031] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e6300 00:20:39.197 [2024-08-11 21:01:49.863342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.863503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.875436] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e5a90 00:20:39.197 [2024-08-11 21:01:49.876911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:39.197 [2024-08-11 21:01:49.876944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.888859] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e5220 00:20:39.197 [2024-08-11 21:01:49.890138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.890295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.902340] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e49b0 00:20:39.197 [2024-08-11 21:01:49.903770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.903803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.915698] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e4140 00:20:39.197 [2024-08-11 21:01:49.916937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.917094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.929338] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e38d0 00:20:39.197 [2024-08-11 21:01:49.930835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.930863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.942832] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e3060 00:20:39.197 [2024-08-11 21:01:49.944039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.944197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.956392] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e27f0 00:20:39.197 [2024-08-11 21:01:49.957852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.957880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:39.197 [2024-08-11 21:01:49.969952] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e1f80 00:20:39.197 [2024-08-11 21:01:49.971142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14372 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.197 [2024-08-11 21:01:49.971175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:39.456 [2024-08-11 21:01:49.983138] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e1710 00:20:39.457 [2024-08-11 21:01:49.984468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:49.984492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:49.996566] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e0ea0 00:20:39.457 [2024-08-11 21:01:49.997720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:49.997874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.009916] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e0630 00:20:39.457 [2024-08-11 21:01:50.011278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.011325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.023524] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190dfdc0 00:20:39.457 [2024-08-11 21:01:50.024758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.024788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.036828] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190df550 00:20:39.457 [2024-08-11 21:01:50.037923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.037947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.049974] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190dece0 00:20:39.457 [2024-08-11 21:01:50.051081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.051106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.063187] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190de470 00:20:39.457 [2024-08-11 21:01:50.064252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:22184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.064276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.082007] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ddc00 00:20:39.457 [2024-08-11 21:01:50.084084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.084244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.095343] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190de470 00:20:39.457 [2024-08-11 21:01:50.097580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.097624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.108744] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190dece0 00:20:39.457 [2024-08-11 21:01:50.110783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.110937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.122061] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190df550 00:20:39.457 [2024-08-11 21:01:50.124343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.124373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.135546] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190dfdc0 00:20:39.457 [2024-08-11 21:01:50.137682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.137713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.148906] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e0630 00:20:39.457 [2024-08-11 21:01:50.150917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.150950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.162116] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e0ea0 00:20:39.457 [2024-08-11 21:01:50.164081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.164111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.175307] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e1710 00:20:39.457 [2024-08-11 21:01:50.177277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.177436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.189327] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e1f80 00:20:39.457 [2024-08-11 21:01:50.191444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.191477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.203675] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e27f0 00:20:39.457 [2024-08-11 21:01:50.205686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.205866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.217504] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e3060 00:20:39.457 [2024-08-11 21:01:50.219782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.457 [2024-08-11 21:01:50.219815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:39.457 [2024-08-11 21:01:50.231465] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e38d0 00:20:39.716 [2024-08-11 21:01:50.233420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.716 [2024-08-11 21:01:50.233577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:39.716 [2024-08-11 21:01:50.245472] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e4140 00:20:39.716 [2024-08-11 21:01:50.247423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.247455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.259092] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e49b0 00:20:39.717 [2024-08-11 
21:01:50.261009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.261167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.272860] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e5220 00:20:39.717 [2024-08-11 21:01:50.274778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.274811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.286593] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e5a90 00:20:39.717 [2024-08-11 21:01:50.288824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.288856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.301139] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e6300 00:20:39.717 [2024-08-11 21:01:50.303039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.303197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.315499] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e6b70 00:20:39.717 [2024-08-11 21:01:50.317394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.317426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.329289] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e73e0 00:20:39.717 [2024-08-11 21:01:50.331150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.331182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.343113] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e7c50 00:20:39.717 [2024-08-11 21:01:50.344981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.345013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.356908] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e84c0 
00:20:39.717 [2024-08-11 21:01:50.358713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.358744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.370889] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e8d30 00:20:39.717 [2024-08-11 21:01:50.372691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.372722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.384724] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e95a0 00:20:39.717 [2024-08-11 21:01:50.386440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.386473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.398014] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190e9e10 00:20:39.717 [2024-08-11 21:01:50.399718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.399749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.411218] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ea680 00:20:39.717 [2024-08-11 21:01:50.413077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.413108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.424643] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190eaef0 00:20:39.717 [2024-08-11 21:01:50.426327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.426358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.437915] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190eb760 00:20:39.717 [2024-08-11 21:01:50.439831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.439861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.451575] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with 
pdu=0x2000190ebfd0 00:20:39.717 [2024-08-11 21:01:50.453301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.453331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.464826] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ec840 00:20:39.717 [2024-08-11 21:01:50.466440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.466472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.478111] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ed0b0 00:20:39.717 [2024-08-11 21:01:50.479716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.479746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:39.717 [2024-08-11 21:01:50.491306] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ed920 00:20:39.717 [2024-08-11 21:01:50.493066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.717 [2024-08-11 21:01:50.493223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.504850] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ee190 00:20:39.977 [2024-08-11 21:01:50.506421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.506453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.518130] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190eea00 00:20:39.977 [2024-08-11 21:01:50.519853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.519885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.531527] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190ef270 00:20:39.977 [2024-08-11 21:01:50.533160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.533189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.544832] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20164d0) with pdu=0x2000190efae0 00:20:39.977 [2024-08-11 21:01:50.546355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.546512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.558172] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f0350 00:20:39.977 [2024-08-11 21:01:50.559681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.559712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.571312] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f0bc0 00:20:39.977 [2024-08-11 21:01:50.572912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.572942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.584572] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f1430 00:20:39.977 [2024-08-11 21:01:50.586061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.586100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.597801] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f1ca0 00:20:39.977 [2024-08-11 21:01:50.599260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.599291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.610969] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f2510 00:20:39.977 [2024-08-11 21:01:50.612680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.612709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.624444] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f2d80 00:20:39.977 [2024-08-11 21:01:50.625972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.626002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.637691] tcp.c:2166:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f35f0 00:20:39.977 [2024-08-11 21:01:50.639098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.639255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.651070] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f3e60 00:20:39.977 [2024-08-11 21:01:50.652458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.652491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.664305] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f46d0 00:20:39.977 [2024-08-11 21:01:50.665703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.665734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.677518] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f4f40 00:20:39.977 [2024-08-11 21:01:50.678993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.679022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.690800] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f57b0 00:20:39.977 [2024-08-11 21:01:50.692139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.692170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.704342] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f6020 00:20:39.977 [2024-08-11 21:01:50.706099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.706131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.718785] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f6890 00:20:39.977 [2024-08-11 21:01:50.720218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.720261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.732526] 
tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f7100 00:20:39.977 [2024-08-11 21:01:50.733859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.733886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:39.977 [2024-08-11 21:01:50.745974] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f7970 00:20:39.977 [2024-08-11 21:01:50.747304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.977 [2024-08-11 21:01:50.747330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.759354] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f81e0 00:20:40.237 [2024-08-11 21:01:50.760648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.760673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.772822] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f8a50 00:20:40.237 [2024-08-11 21:01:50.774096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.774123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.786157] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f92c0 00:20:40.237 [2024-08-11 21:01:50.787383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.787409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.799407] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f9b30 00:20:40.237 [2024-08-11 21:01:50.800628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.800662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.812636] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fa3a0 00:20:40.237 [2024-08-11 21:01:50.813846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.813872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:40.237 
[2024-08-11 21:01:50.825874] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fac10 00:20:40.237 [2024-08-11 21:01:50.827057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.827083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.839112] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fb480 00:20:40.237 [2024-08-11 21:01:50.840271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.840305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.852409] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fbcf0 00:20:40.237 [2024-08-11 21:01:50.853550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.853577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.865517] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fc560 00:20:40.237 [2024-08-11 21:01:50.866663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.866687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.878668] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fcdd0 00:20:40.237 [2024-08-11 21:01:50.879776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.879800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.891958] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fd640 00:20:40.237 [2024-08-11 21:01:50.893050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.893074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.905234] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fdeb0 00:20:40.237 [2024-08-11 21:01:50.906321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.906347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:0007 p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.918568] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fe720 00:20:40.237 [2024-08-11 21:01:50.919730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.919759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.931958] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fef90 00:20:40.237 [2024-08-11 21:01:50.933048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.933074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.950996] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fef90 00:20:40.237 [2024-08-11 21:01:50.953146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.953173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.964413] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fe720 00:20:40.237 [2024-08-11 21:01:50.966496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.237 [2024-08-11 21:01:50.966522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:40.237 [2024-08-11 21:01:50.977583] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fdeb0 00:20:40.238 [2024-08-11 21:01:50.979608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.238 [2024-08-11 21:01:50.979633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:40.238 [2024-08-11 21:01:50.990741] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fd640 00:20:40.238 [2024-08-11 21:01:50.992739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.238 [2024-08-11 21:01:50.992764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:40.238 [2024-08-11 21:01:51.003866] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fcdd0 00:20:40.238 [2024-08-11 21:01:51.005849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.238 [2024-08-11 21:01:51.005876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.017067] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fc560 00:20:40.497 [2024-08-11 21:01:51.019042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.019068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.030223] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fbcf0 00:20:40.497 [2024-08-11 21:01:51.032172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.032197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.043363] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fb480 00:20:40.497 [2024-08-11 21:01:51.045294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.045320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.056544] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fac10 00:20:40.497 [2024-08-11 21:01:51.058480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.058507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.069693] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190fa3a0 00:20:40.497 [2024-08-11 21:01:51.071604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.071628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.082817] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f9b30 00:20:40.497 [2024-08-11 21:01:51.084704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.084729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.095987] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f92c0 00:20:40.497 [2024-08-11 21:01:51.097861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.097887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.109203] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f8a50 00:20:40.497 [2024-08-11 21:01:51.111072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.111099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.122471] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f81e0 00:20:40.497 [2024-08-11 21:01:51.124316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.124342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.135639] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f7970 00:20:40.497 [2024-08-11 21:01:51.137453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.137479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.148909] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f7100 00:20:40.497 [2024-08-11 21:01:51.150728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.150755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.497 [2024-08-11 21:01:51.162053] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f6890 00:20:40.497 [2024-08-11 21:01:51.163854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.497 [2024-08-11 21:01:51.163879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:40.498 [2024-08-11 21:01:51.175214] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f6020 00:20:40.498 [2024-08-11 21:01:51.176991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.498 [2024-08-11 21:01:51.177017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:40.498 [2024-08-11 21:01:51.188372] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f57b0 00:20:40.498 [2024-08-11 21:01:51.190146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.498 [2024-08-11 21:01:51.190173] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:40.498 [2024-08-11 21:01:51.201540] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f4f40 00:20:40.498 [2024-08-11 21:01:51.203293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.498 [2024-08-11 21:01:51.203319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:40.498 [2024-08-11 21:01:51.214721] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20164d0) with pdu=0x2000190f46d0 00:20:40.498 [2024-08-11 21:01:51.216439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.498 [2024-08-11 21:01:51.216466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:40.498 00:20:40.498 Latency(us) 00:20:40.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.498 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:40.498 nvme0n1 : 2.00 18885.46 73.77 0.00 0.00 6772.25 6166.34 25499.46 00:20:40.498 =================================================================================================================== 00:20:40.498 Total : 18885.46 73.77 0.00 0.00 6772.25 6166.34 25499.46 00:20:40.498 0 00:20:40.498 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:40.498 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:40.498 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:40.498 | .driver_specific 00:20:40.498 | .nvme_error 00:20:40.498 | .status_code 00:20:40.498 | .command_transient_transport_error' 00:20:40.498 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 )) 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92802 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 92802 ']' 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 92802 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92802 00:20:41.065 killing process with pid 92802 00:20:41.065 Received shutdown signal, test time was about 2.000000 seconds 00:20:41.065 00:20:41.065 Latency(us) 00:20:41.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.065 
=================================================================================================================== 00:20:41.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92802' 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 92802 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 92802 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92855 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92855 /var/tmp/bperf.sock 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 92855 ']' 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:41.065 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:41.066 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:41.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:41.066 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:41.066 21:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:41.066 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:41.066 [2024-08-11 21:01:51.813305] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:20:41.066 [2024-08-11 21:01:51.813379] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92855 ] 00:20:41.066 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:41.066 Zero copy mechanism will not be used. 
00:20:41.324 [2024-08-11 21:01:51.942185] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.324 [2024-08-11 21:01:52.016429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.324 [2024-08-11 21:01:52.068765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:41.583 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:41.583 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:20:41.583 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:41.583 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:41.843 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:41.843 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:41.843 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:41.843 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:41.843 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:41.843 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:42.102 nvme0n1 00:20:42.102 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:42.102 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@557 -- # xtrace_disable 00:20:42.102 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.102 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:20:42.102 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:42.102 21:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:42.102 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:42.102 Zero copy mechanism will not be used. 00:20:42.102 Running I/O for 2 seconds... 
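The RPC sequence above is the whole digest-error setup for this pass. A plain-shell sketch of the same steps, mirroring the commands in the trace (the socket used for the accel_error_inject_error calls is not shown in the log, so leaving rpc.py on its default socket here is an assumption), followed by the iostat/jq readback that get_transient_errcount performed earlier in this log:

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
RPC="$SPDK/scripts/rpc.py"

# Track NVMe errors per status code and never give up on retries in the bdev layer.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any stale injection, then attach the controller with data digest (--ddgst) enabled.
"$RPC" accel_error_inject_error -o crc32c -t disable       # socket: assumed default
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Turn on crc32c corruption so the data digest checks fail (flags copied from the trace).
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32  # socket: assumed default

# Kick off the queued workload in the waiting bdevperf instance.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# Read back how many commands ended in COMMAND TRANSIENT TRANSPORT ERROR.
"$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'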
00:20:42.362 [2024-08-11 21:01:52.886229] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.886497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.886524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.890786] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.891033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.891061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.895334] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.895583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.895619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.899856] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.900101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.900127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.904381] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.904643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.904670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.908921] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.909167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.909193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.913432] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.913692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.913712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.917973] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.918230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.918250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.922470] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.922728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.922754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.926995] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.927240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.927267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.931520] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.931778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.931804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.936055] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.936300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.936326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.940562] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.940820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.940846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.945107] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.945354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.945380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.949634] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.362 [2024-08-11 21:01:52.949880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.362 [2024-08-11 21:01:52.949905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.362 [2024-08-11 21:01:52.954160] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.954406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.954431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:52.958694] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.958943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.958968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:52.963217] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.963462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.963487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:52.967749] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.967997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.968022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:52.972409] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.972664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.972689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:52.976912] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.977159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.977179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:52.981465] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.981723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.981744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:52.986002] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.986278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.986304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:52.990576] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.990837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.990857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:52.995102] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.995348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.995367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:52.999638] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:52.999882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:52.999909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.004161] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.004411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.004437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.008700] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.008943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 
[2024-08-11 21:01:53.008968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.013259] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.013505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.013531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.017750] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.017998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.018023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.022292] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.022538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.022564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.026798] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.027045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.027065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.031368] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.031627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.031647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.035864] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.036109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.036135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.040380] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.040637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.040662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.044908] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.045155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.045175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.049415] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.049673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.049693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.053908] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.054164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.054190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.058411] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.058670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.058690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.062936] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.063181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.063201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.067438] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.067696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.067716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.071957] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.072203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.363 [2024-08-11 21:01:53.072223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.363 [2024-08-11 21:01:53.076441] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.363 [2024-08-11 21:01:53.076699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.076735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.080957] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.081204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.081230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.085496] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.085752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.085774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.089924] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.090194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.090215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.094422] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.094682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.094703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.098959] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.099203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.099224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.103474] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.103733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.103758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.108016] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.108261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.108286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.112485] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.112743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.112763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.116996] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.117240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.117260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.121501] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.121757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.121782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.126027] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.126280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.126305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.130474] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 [2024-08-11 21:01:53.130730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.130750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.364 [2024-08-11 21:01:53.134991] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.364 
[2024-08-11 21:01:53.135240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.364 [2024-08-11 21:01:53.135260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.632 [2024-08-11 21:01:53.139473] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.632 [2024-08-11 21:01:53.139734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.632 [2024-08-11 21:01:53.139755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.632 [2024-08-11 21:01:53.143968] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.632 [2024-08-11 21:01:53.144215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.632 [2024-08-11 21:01:53.144234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.632 [2024-08-11 21:01:53.148455] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.632 [2024-08-11 21:01:53.148715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.632 [2024-08-11 21:01:53.148740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.632 [2024-08-11 21:01:53.152976] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.632 [2024-08-11 21:01:53.153221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.632 [2024-08-11 21:01:53.153248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.632 [2024-08-11 21:01:53.157431] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.633 [2024-08-11 21:01:53.157687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.633 [2024-08-11 21:01:53.157707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.633 [2024-08-11 21:01:53.161930] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.633 [2024-08-11 21:01:53.162183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.633 [2024-08-11 21:01:53.162203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.633 [2024-08-11 21:01:53.166419] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.633 [2024-08-11 21:01:53.166676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.633 [2024-08-11 21:01:53.166696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.633 [2024-08-11 21:01:53.170900] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.633 [2024-08-11 21:01:53.171148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.633 [2024-08-11 21:01:53.171167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.633 [2024-08-11 21:01:53.175395] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.633 [2024-08-11 21:01:53.175653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.634 [2024-08-11 21:01:53.175673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.634 [2024-08-11 21:01:53.179887] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.634 [2024-08-11 21:01:53.180131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.634 [2024-08-11 21:01:53.180151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.634 [2024-08-11 21:01:53.184399] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.634 [2024-08-11 21:01:53.184656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.634 [2024-08-11 21:01:53.184681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.634 [2024-08-11 21:01:53.188978] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.634 [2024-08-11 21:01:53.189246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.634 [2024-08-11 21:01:53.189273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.635 [2024-08-11 21:01:53.193519] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.635 [2024-08-11 21:01:53.193776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.635 [2024-08-11 21:01:53.193797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.635 [2024-08-11 21:01:53.198038] 
tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.635 [2024-08-11 21:01:53.198295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.635 [2024-08-11 21:01:53.198315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.635 [2024-08-11 21:01:53.202517] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.635 [2024-08-11 21:01:53.202773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.635 [2024-08-11 21:01:53.202798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.636 [2024-08-11 21:01:53.207040] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.636 [2024-08-11 21:01:53.207286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.636 [2024-08-11 21:01:53.207312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.636 [2024-08-11 21:01:53.211632] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.636 [2024-08-11 21:01:53.211900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.636 [2024-08-11 21:01:53.211926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.636 [2024-08-11 21:01:53.216223] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.636 [2024-08-11 21:01:53.216469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.636 [2024-08-11 21:01:53.216489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.636 [2024-08-11 21:01:53.220724] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.636 [2024-08-11 21:01:53.220968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.636 [2024-08-11 21:01:53.220988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.637 [2024-08-11 21:01:53.225214] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.637 [2024-08-11 21:01:53.225461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.637 [2024-08-11 21:01:53.225487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:42.637 [2024-08-11 21:01:53.229740] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.637 [2024-08-11 21:01:53.229984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.637 [2024-08-11 21:01:53.230009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.637 [2024-08-11 21:01:53.234254] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.637 [2024-08-11 21:01:53.234500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.637 [2024-08-11 21:01:53.234526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.637 [2024-08-11 21:01:53.238764] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.638 [2024-08-11 21:01:53.239008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.638 [2024-08-11 21:01:53.239034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.638 [2024-08-11 21:01:53.243236] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.638 [2024-08-11 21:01:53.243481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.638 [2024-08-11 21:01:53.243507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.638 [2024-08-11 21:01:53.247726] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.638 [2024-08-11 21:01:53.247973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.638 [2024-08-11 21:01:53.247998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.638 [2024-08-11 21:01:53.252209] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.638 [2024-08-11 21:01:53.252453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.639 [2024-08-11 21:01:53.252479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.639 [2024-08-11 21:01:53.256712] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.639 [2024-08-11 21:01:53.256958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.639 [2024-08-11 21:01:53.256984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.639 [2024-08-11 21:01:53.261254] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.639 [2024-08-11 21:01:53.261501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.639 [2024-08-11 21:01:53.261526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.639 [2024-08-11 21:01:53.265744] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.639 [2024-08-11 21:01:53.266028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.640 [2024-08-11 21:01:53.266053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.640 [2024-08-11 21:01:53.270244] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.640 [2024-08-11 21:01:53.270488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.640 [2024-08-11 21:01:53.270513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.640 [2024-08-11 21:01:53.274786] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.640 [2024-08-11 21:01:53.275032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.640 [2024-08-11 21:01:53.275053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.640 [2024-08-11 21:01:53.279306] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.640 [2024-08-11 21:01:53.279551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.641 [2024-08-11 21:01:53.279571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.641 [2024-08-11 21:01:53.284008] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.641 [2024-08-11 21:01:53.284254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.641 [2024-08-11 21:01:53.284280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.641 [2024-08-11 21:01:53.288521] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.641 [2024-08-11 21:01:53.288776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.641 [2024-08-11 21:01:53.288801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.641 [2024-08-11 21:01:53.293013] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.641 [2024-08-11 21:01:53.293259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.642 [2024-08-11 21:01:53.293286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.642 [2024-08-11 21:01:53.297581] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.642 [2024-08-11 21:01:53.297840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.642 [2024-08-11 21:01:53.297864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.642 [2024-08-11 21:01:53.302124] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.642 [2024-08-11 21:01:53.302371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.642 [2024-08-11 21:01:53.302396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.642 [2024-08-11 21:01:53.306648] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.642 [2024-08-11 21:01:53.306893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.643 [2024-08-11 21:01:53.306919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.643 [2024-08-11 21:01:53.311146] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.643 [2024-08-11 21:01:53.311390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.643 [2024-08-11 21:01:53.311415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.643 [2024-08-11 21:01:53.315657] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.643 [2024-08-11 21:01:53.315901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.643 [2024-08-11 21:01:53.315926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.643 [2024-08-11 21:01:53.320170] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.643 [2024-08-11 21:01:53.320431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.643 [2024-08-11 21:01:53.320456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.643 [2024-08-11 21:01:53.324683] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.644 [2024-08-11 21:01:53.324927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.644 [2024-08-11 21:01:53.324952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.644 [2024-08-11 21:01:53.329162] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.644 [2024-08-11 21:01:53.329406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.644 [2024-08-11 21:01:53.329431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.644 [2024-08-11 21:01:53.333760] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.644 [2024-08-11 21:01:53.334008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.644 [2024-08-11 21:01:53.334033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.644 [2024-08-11 21:01:53.338264] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.644 [2024-08-11 21:01:53.338508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.644 [2024-08-11 21:01:53.338533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.644 [2024-08-11 21:01:53.342742] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.644 [2024-08-11 21:01:53.342988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.644 [2024-08-11 21:01:53.343013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.644 [2024-08-11 21:01:53.347251] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.645 [2024-08-11 21:01:53.347499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.645 [2024-08-11 21:01:53.347525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.645 [2024-08-11 21:01:53.351780] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.645 [2024-08-11 21:01:53.352024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.645 
[2024-08-11 21:01:53.352049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.645 [2024-08-11 21:01:53.356273] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.645 [2024-08-11 21:01:53.356517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.645 [2024-08-11 21:01:53.356543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.645 [2024-08-11 21:01:53.360798] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.645 [2024-08-11 21:01:53.361042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.646 [2024-08-11 21:01:53.361062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.646 [2024-08-11 21:01:53.365313] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.646 [2024-08-11 21:01:53.365560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.646 [2024-08-11 21:01:53.365580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.646 [2024-08-11 21:01:53.369867] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.646 [2024-08-11 21:01:53.370135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.646 [2024-08-11 21:01:53.370160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.646 [2024-08-11 21:01:53.374397] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.646 [2024-08-11 21:01:53.374654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.646 [2024-08-11 21:01:53.374679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.646 [2024-08-11 21:01:53.378933] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.646 [2024-08-11 21:01:53.379177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.646 [2024-08-11 21:01:53.379202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.646 [2024-08-11 21:01:53.383404] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.646 [2024-08-11 21:01:53.383663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.647 [2024-08-11 21:01:53.383683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.647 [2024-08-11 21:01:53.387953] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.647 [2024-08-11 21:01:53.388197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.647 [2024-08-11 21:01:53.388216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.647 [2024-08-11 21:01:53.392437] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.647 [2024-08-11 21:01:53.392696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.647 [2024-08-11 21:01:53.392716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.647 [2024-08-11 21:01:53.396958] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.647 [2024-08-11 21:01:53.397205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.647 [2024-08-11 21:01:53.397225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.647 [2024-08-11 21:01:53.401456] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.647 [2024-08-11 21:01:53.401713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.647 [2024-08-11 21:01:53.401738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.908 [2024-08-11 21:01:53.406118] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.908 [2024-08-11 21:01:53.406366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.908 [2024-08-11 21:01:53.406392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.908 [2024-08-11 21:01:53.410588] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.908 [2024-08-11 21:01:53.410848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.908 [2024-08-11 21:01:53.410868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.908 [2024-08-11 21:01:53.415146] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.415391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.415411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.419670] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.419914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.419938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.424171] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.424414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.424440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.428677] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.428921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.428946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.433202] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.433447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.433474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.437708] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.437952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.437977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.442196] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.442440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.442464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.446728] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.446976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.447001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.451219] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.451468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.451493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.455712] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.455956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.455980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.460150] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.460395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.460420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.464659] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.464905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.464930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.469143] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.469388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.469414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.473687] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.473933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.473958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.478217] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 
[2024-08-11 21:01:53.478464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.478490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.482697] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.482940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.482965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.487212] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.487458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.487484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.491810] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.492060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.492086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.496340] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.496585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.496622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.500791] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.501037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.501063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.505310] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.505555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.505580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.509833] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) 
with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.510086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.510111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.514381] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.514637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.514657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.518887] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.519130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.519150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.523387] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.523645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.523665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.527901] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.528146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.528166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.532363] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.532618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.532639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.909 [2024-08-11 21:01:53.536817] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.909 [2024-08-11 21:01:53.537062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.909 [2024-08-11 21:01:53.537082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.541327] tcp.c:2166:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.541574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.541611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.545854] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.546108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.546133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.550355] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.550613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.550638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.554846] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.555091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.555116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.559346] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.559603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.559628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.563857] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.564102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.564127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.568413] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.568670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.568690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 
21:01:53.572847] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.573090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.573110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.578021] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.578328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.578356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.583247] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.583540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.583569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.588525] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.588825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.588854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.593916] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.594219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.594245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.598707] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.598955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.598979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.603213] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.603458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.603484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.607730] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.607974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.607999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.612306] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.612551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.612577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.616848] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.617093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.617119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.621346] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.621601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.621626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.625941] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.626215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.626241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.630420] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.630682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.630706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.634956] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.635202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.635227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.639448] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.639703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.639728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.643975] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.644219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.644244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.648495] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.648752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.648777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.653006] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.653254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.653279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.657482] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.657739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.657765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.662007] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.662261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.910 [2024-08-11 21:01:53.662286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.910 [2024-08-11 21:01:53.666521] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.910 [2024-08-11 21:01:53.666778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.911 [2024-08-11 21:01:53.666803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:42.911 [2024-08-11 21:01:53.671030] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.911 [2024-08-11 21:01:53.671275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.911 [2024-08-11 21:01:53.671300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:42.911 [2024-08-11 21:01:53.675556] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.911 [2024-08-11 21:01:53.675816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.911 [2024-08-11 21:01:53.675841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:42.911 [2024-08-11 21:01:53.680041] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:42.911 [2024-08-11 21:01:53.680286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.911 [2024-08-11 21:01:53.680310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:42.911 [2024-08-11 21:01:53.684554] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.684813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.684838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.689090] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.689338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.689363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.693574] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.693833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.693858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.698106] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.698350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 
[2024-08-11 21:01:53.698375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.702581] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.702839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.702863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.707102] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.707347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.707373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.711585] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.711843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.711868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.716165] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.716409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.716435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.720650] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.720898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.720923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.725147] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.725391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.725417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.729680] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.729926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.729950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.734160] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.734403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.734428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.738677] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.738943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.738968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.743170] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.743415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.743440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.747637] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.747882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.747906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.752139] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.752398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.752423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.756852] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.757116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.757141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.761641] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.761932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.761958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.766452] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.766722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.766748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.771215] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.771465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.771491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.776019] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.776272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.776298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.780849] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.781127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.781153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.785679] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.785936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.785973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.790505] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.790786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.790812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.795216] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.795471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.795497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.799852] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.171 [2024-08-11 21:01:53.800103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.171 [2024-08-11 21:01:53.800129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.171 [2024-08-11 21:01:53.804431] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.804692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.804717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.809006] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.809259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.809285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.813606] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.813857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.813883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.818247] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.818520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.818546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.822866] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.823116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.823141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.827538] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 
[2024-08-11 21:01:53.827802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.827827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.832111] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.832362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.832388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.836704] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.836954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.836979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.841278] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.841531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.841557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.845907] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.846165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.846190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.850528] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.850793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.850813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.855216] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.855492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.855518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.859870] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.860125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.860151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.864441] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.864705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.864730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.869059] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.869309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.869335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.873674] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.873923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.873948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.878282] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.878535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.878560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.882900] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.883153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.883178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.887682] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.887936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.887956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.892281] 
tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.892532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.892552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.896911] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.897161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.897186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.901508] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.901772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.901797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.906131] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.906381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.906407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.910755] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.911005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.911029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.915367] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.915640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.915665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.920016] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.920268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.920293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:43.172 [2024-08-11 21:01:53.924641] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.924890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.172 [2024-08-11 21:01:53.924910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.172 [2024-08-11 21:01:53.929359] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.172 [2024-08-11 21:01:53.929621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.173 [2024-08-11 21:01:53.929662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.173 [2024-08-11 21:01:53.934107] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.173 [2024-08-11 21:01:53.934398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.173 [2024-08-11 21:01:53.934425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.173 [2024-08-11 21:01:53.939042] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.173 [2024-08-11 21:01:53.939302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.173 [2024-08-11 21:01:53.939328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.173 [2024-08-11 21:01:53.943860] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.173 [2024-08-11 21:01:53.944111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.173 [2024-08-11 21:01:53.944136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.948591] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.948912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.948937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.953309] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.953561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.953587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.958093] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.958349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.958389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.962729] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.962979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.963001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.967298] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.967542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.967567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.972067] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.972313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.972339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.976638] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.976884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.976908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.981139] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.981382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.981407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.985759] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.986006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.986031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.990326] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.990591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.990625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.994851] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.995094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.995119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:53.999349] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:53.999607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:53.999631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:54.003869] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:54.004113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:54.004138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:54.008508] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:54.008771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:54.008796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:54.013114] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:54.013366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:54.013392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:54.017674] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:54.017936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:54.017960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:54.022336] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:54.022583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:54.022616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:54.026856] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.433 [2024-08-11 21:01:54.027103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.433 [2024-08-11 21:01:54.027128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.433 [2024-08-11 21:01:54.031372] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.031634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.031659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.035866] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.036110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.036135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.040344] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.040603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.040626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.044838] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.045083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.045108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.049292] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.049537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 
[2024-08-11 21:01:54.049563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.053768] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.054012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.054036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.058309] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.058556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.058581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.062804] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.063048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.063072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.067340] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.067586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.067624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.071859] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.072106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.072131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.076345] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.076601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.076626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.080859] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.081102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.081128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.085329] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.085574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.085606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.089854] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.090106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.090130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.094343] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.094590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.094624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.098882] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.099131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.099156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.103386] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.103644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.103664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.107891] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.108133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.108159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.112391] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.112651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.112675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.116904] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.117148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.117173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.121396] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.121652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.121676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.126024] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.126282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.126307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.130565] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.130826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.130851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.135063] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.135307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.135332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.139588] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.139847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.139871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.144182] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.144429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.144454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.148915] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.149188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.149215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.153966] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.434 [2024-08-11 21:01:54.154244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.434 [2024-08-11 21:01:54.154269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.434 [2024-08-11 21:01:54.158469] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 [2024-08-11 21:01:54.158731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.158779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.435 [2024-08-11 21:01:54.163273] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 [2024-08-11 21:01:54.163518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.163543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.435 [2024-08-11 21:01:54.167980] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 [2024-08-11 21:01:54.168251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.168277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.435 [2024-08-11 21:01:54.172734] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 [2024-08-11 21:01:54.172995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.173019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.435 [2024-08-11 21:01:54.177289] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 
[2024-08-11 21:01:54.177534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.177558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.435 [2024-08-11 21:01:54.181829] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 [2024-08-11 21:01:54.182085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.182109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.435 [2024-08-11 21:01:54.186393] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 [2024-08-11 21:01:54.186652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.186676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.435 [2024-08-11 21:01:54.190954] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 [2024-08-11 21:01:54.191198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.191223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.435 [2024-08-11 21:01:54.195616] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 [2024-08-11 21:01:54.195915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.195940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.435 [2024-08-11 21:01:54.200416] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 [2024-08-11 21:01:54.200679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.200703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.435 [2024-08-11 21:01:54.204984] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.435 [2024-08-11 21:01:54.205230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.435 [2024-08-11 21:01:54.205255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.209439] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.209696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.209721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.213919] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.214174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.214215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.218578] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.218864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.218890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.223362] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.223627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.223652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.228019] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.228266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.228292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.232527] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.232783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.232808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.237065] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.237313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.237338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.241621] 
tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.241865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.241889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.246264] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.246511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.246536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.250739] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.250983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.251008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.695 [2024-08-11 21:01:54.255316] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.695 [2024-08-11 21:01:54.255561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.695 [2024-08-11 21:01:54.255586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.259794] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.260039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.260063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.264298] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.264544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.264569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.268812] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.269057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.269082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:43.696 [2024-08-11 21:01:54.273335] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.273580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.273616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.277811] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.278057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.278089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.282333] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.282579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.282614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.286836] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.287084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.287109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.291363] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.291622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.291646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.295875] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.296119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.296144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.300368] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.300625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.300650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.304892] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.305138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.305163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.309376] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.309634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.309658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.313931] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.314188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.314212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.318576] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.318833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.318857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.323129] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.323376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.323401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.327606] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.327852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.327876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.332131] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.332380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.332406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.336623] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.336868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.336893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.341316] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.341570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.341604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.345825] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.346070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.346102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.350339] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.350584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.350618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.354820] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.355065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.355090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.359332] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.359576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.359612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.363851] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.364096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.364121] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.368445] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.368702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.368727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.372967] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.373215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.373240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.696 [2024-08-11 21:01:54.377449] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.696 [2024-08-11 21:01:54.377707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.696 [2024-08-11 21:01:54.377731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.381959] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.382232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.382258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.386550] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.386809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.386834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.391035] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.391279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.391304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.395517] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.395775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:43.697 [2024-08-11 21:01:54.395799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.400016] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.400263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.400288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.404516] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.404773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.404798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.409032] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.409280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.409306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.413506] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.413762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.413786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.418000] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.418253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.418278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.422500] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.422759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.422784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.427050] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.427295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.427321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.431615] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.431860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.431886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.436093] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.436342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.436366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.440639] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.440886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.440910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.445229] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.445474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.445498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.449723] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.449968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.449992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.454257] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.454504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.454529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.458768] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.459014] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.459039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.463303] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.463547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.463572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.697 [2024-08-11 21:01:54.467807] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.697 [2024-08-11 21:01:54.468052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.697 [2024-08-11 21:01:54.468076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.472353] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.472614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.472638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.476922] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.477168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.477193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.481417] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.481675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.481699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.485925] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.486181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.486205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.490459] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.490718] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.490743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.494976] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.495223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.495247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.499496] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.499754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.499779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.504016] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.504262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.504287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.508470] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.508729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.508753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.513012] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.513259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.513284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.517478] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.517733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.517757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.522022] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 
00:20:43.958 [2024-08-11 21:01:54.522278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.522303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.526609] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.526853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.526878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.531083] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.531330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.531356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.535618] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.535863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.535888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.540130] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.540378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.540403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.544669] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.544914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.544938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.549216] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.549461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.549485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.553711] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.553959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.553983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.558240] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.558505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.558530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.562771] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.563016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.958 [2024-08-11 21:01:54.563040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.958 [2024-08-11 21:01:54.567269] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.958 [2024-08-11 21:01:54.567515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.567540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.571815] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.572060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.572085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.576317] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.576562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.576587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.580816] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.581061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.581086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.585322] 
tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.585572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.585605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.589797] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.590041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.590066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.594292] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.594538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.594563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.598779] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.599024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.599048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.603270] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.603519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.603544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.607773] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.608017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.608042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.612206] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.612451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.612476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
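Each repeated triple in this stretch of the log records one data-digest (CRC32C) failure detected on the host side: tcp.c:data_crc32_calc_done flags the mismatch on the incoming PDU, nvme_qpair.c prints the WRITE command it belonged to, and the matching completion carries COMMAND TRANSIENT TRANSPORT ERROR (00/22). The harness tallies these completions through the bdevperf RPC socket once I/O stops (the bdev_get_iostat call traced further below) rather than counting log lines. A minimal sketch of that readback, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev used in this run:

    #!/usr/bin/env bash
    # Sketch: query the transient-transport-error tally the same way
    # host/digest.sh does below; the socket path and bdev name are taken
    # from this run and may differ elsewhere.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The digest-error test passes only if at least one such completion was observed.
    (( count > 0 )) && echo "transient transport errors: $count"
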
00:20:43.959 [2024-08-11 21:01:54.616676] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.616924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.616948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.621165] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.621408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.621433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.625674] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.625921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.625946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.630198] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.630442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.630467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.634728] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.634975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.634999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.639219] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.639468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.639493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.643718] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.643964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.643990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.648227] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.648473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.648499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.652734] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.652980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.653005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.657214] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.657460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.657485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.661726] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.661970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.661995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.666267] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.666515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.666539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.670770] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.671013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.671037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.675293] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.675537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.675562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.679942] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.680188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.680213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.684448] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.684704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.684728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.688935] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.959 [2024-08-11 21:01:54.689181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.959 [2024-08-11 21:01:54.689205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.959 [2024-08-11 21:01:54.693431] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.960 [2024-08-11 21:01:54.693693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.960 [2024-08-11 21:01:54.693717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.960 [2024-08-11 21:01:54.697925] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.960 [2024-08-11 21:01:54.698177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.960 [2024-08-11 21:01:54.698202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.960 [2024-08-11 21:01:54.702434] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.960 [2024-08-11 21:01:54.702694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.960 [2024-08-11 21:01:54.702718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.960 [2024-08-11 21:01:54.707012] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.960 [2024-08-11 21:01:54.707256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.960 [2024-08-11 21:01:54.707281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.960 [2024-08-11 21:01:54.711533] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.960 [2024-08-11 21:01:54.711791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.960 [2024-08-11 21:01:54.711815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:43.960 [2024-08-11 21:01:54.716033] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.960 [2024-08-11 21:01:54.716278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.960 [2024-08-11 21:01:54.716303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:43.960 [2024-08-11 21:01:54.720577] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.960 [2024-08-11 21:01:54.720836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.960 [2024-08-11 21:01:54.720860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:43.960 [2024-08-11 21:01:54.725103] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.960 [2024-08-11 21:01:54.725351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.960 [2024-08-11 21:01:54.725376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.960 [2024-08-11 21:01:54.729608] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:43.960 [2024-08-11 21:01:54.729851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.960 [2024-08-11 21:01:54.729876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.219 [2024-08-11 21:01:54.734168] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.219 [2024-08-11 21:01:54.734416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.734441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.738768] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.739016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 
[2024-08-11 21:01:54.739041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.743240] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.743484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.743509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.747789] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.748034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.748060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.752321] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.752566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.752602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.756912] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.757177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.757202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.761420] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.761675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.761699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.765990] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.766246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.766270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.770491] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.770749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.770773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.775063] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.775309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.775334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.779586] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.779842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.779866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.784253] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.784508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.784533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.788844] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.789098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.789123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.793623] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.793890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.793916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.798425] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.798716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.798742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.803312] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.803562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.803588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.808226] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.808472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.808497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.813002] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.813247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.813273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.817733] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.817997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.818022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.822531] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.822807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.822832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.827180] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.827425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.827450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.831648] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.831892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.831917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.836191] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.836438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.836463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.840706] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.840972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.840997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.845414] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.845671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.845696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.849995] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.850261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.850285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.854551] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.854813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.854837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.859180] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.859426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.220 [2024-08-11 21:01:54.859451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.220 [2024-08-11 21:01:54.863694] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.220 [2024-08-11 21:01:54.863939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.221 [2024-08-11 21:01:54.863964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.221 [2024-08-11 21:01:54.868189] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.221 
[2024-08-11 21:01:54.868438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.221 [2024-08-11 21:01:54.868462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.221 [2024-08-11 21:01:54.872688] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.221 [2024-08-11 21:01:54.872934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.221 [2024-08-11 21:01:54.872958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:44.221 [2024-08-11 21:01:54.877229] tcp.c:2166:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e4efc0) with pdu=0x2000190fef90 00:20:44.221 [2024-08-11 21:01:54.877475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.221 [2024-08-11 21:01:54.877500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.221 00:20:44.221 Latency(us) 00:20:44.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.221 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:44.221 nvme0n1 : 2.00 6798.08 849.76 0.00 0.00 2348.41 2100.13 5302.46 00:20:44.221 =================================================================================================================== 00:20:44.221 Total : 6798.08 849.76 0.00 0.00 2348.41 2100.13 5302.46 00:20:44.221 0 00:20:44.221 21:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:44.221 21:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:44.221 21:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:44.221 | .driver_specific 00:20:44.221 | .nvme_error 00:20:44.221 | .status_code 00:20:44.221 | .command_transient_transport_error' 00:20:44.221 21:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 438 > 0 )) 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92855 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 92855 ']' 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 92855 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92855 00:20:44.480 killing process with pid 92855 00:20:44.480 Received shutdown signal, test time was about 
2.000000 seconds 00:20:44.480 00:20:44.480 Latency(us) 00:20:44.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.480 =================================================================================================================== 00:20:44.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92855' 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 92855 00:20:44.480 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 92855 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 92671 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 92671 ']' 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 92671 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92671 00:20:44.739 killing process with pid 92671 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92671' 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 92671 00:20:44.739 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 92671 00:20:44.997 ************************************ 00:20:44.997 END TEST nvmf_digest_error 00:20:44.997 ************************************ 00:20:44.997 00:20:44.997 real 0m15.327s 00:20:44.997 user 0m29.581s 00:20:44.997 sys 0m4.726s 00:20:44.997 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:44.997 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:44.997 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:44.997 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:44.998 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@508 -- # nvmfcleanup 00:20:44.998 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:20:44.998 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:44.998 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:20:44.998 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:20:44.998 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:45.257 rmmod nvme_tcp 00:20:45.257 rmmod nvme_fabrics 00:20:45.257 rmmod nvme_keyring 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@509 -- # '[' -n 92671 ']' 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@510 -- # killprocess 92671 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 92671 ']' 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 92671 00:20:45.257 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (92671) - No such process 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 92671 is not found' 00:20:45.257 Process with pid 92671 is not found 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # iptr 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@783 -- # iptables-save 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@783 -- # iptables-restore 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:20:45.257 21:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:20:45.257 21:01:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:45.516 21:01:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.516 21:01:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # remove_spdk_ns 00:20:45.516 21:01:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.516 21:01:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.516 21:01:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.516 21:01:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # return 0 00:20:45.516 00:20:45.516 real 0m32.470s 00:20:45.516 user 1m1.185s 00:20:45.516 sys 0m9.777s 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:45.517 ************************************ 00:20:45.517 END TEST nvmf_digest 00:20:45.517 ************************************ 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.517 ************************************ 00:20:45.517 START TEST nvmf_host_multipath 00:20:45.517 ************************************ 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:45.517 * Looking for test storage... 
00:20:45.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # prepare_net_devs 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@430 -- # local -g is_hw=no 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # remove_spdk_ns 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@452 -- # nvmf_veth_init 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:45.517 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.518 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:45.518 21:01:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:45.518 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:45.518 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:45.518 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:20:45.775 Cannot find device "nvmf_init_br" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:20:45.776 Cannot find device "nvmf_init_br2" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:20:45.776 Cannot find device "nvmf_tgt_br" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.776 Cannot find device "nvmf_tgt_br2" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:20:45.776 Cannot find device "nvmf_init_br" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:20:45.776 Cannot find device "nvmf_init_br2" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:20:45.776 Cannot find device "nvmf_tgt_br" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:20:45.776 Cannot find device "nvmf_tgt_br2" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:20:45.776 Cannot find device "nvmf_br" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:20:45.776 Cannot find device "nvmf_init_if" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:20:45.776 Cannot find device "nvmf_init_if2" 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:45.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:20:45.776 21:01:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:45.776 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:20:46.035 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.035 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:20:46.035 00:20:46.035 --- 10.0.0.3 ping statistics --- 00:20:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.035 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:20:46.035 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:46.035 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:20:46.035 00:20:46.035 --- 10.0.0.4 ping statistics --- 00:20:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.035 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:20:46.035 00:20:46.035 --- 10.0.0.1 ping statistics --- 00:20:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.035 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:46.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:46.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:20:46.035 00:20:46.035 --- 10.0.0.2 ping statistics --- 00:20:46.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.035 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@453 -- # return 0 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@501 -- # nvmfpid=93153 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # waitforlisten 93153 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 93153 ']' 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:46.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:46.035 21:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:46.035 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:20:46.036 [2024-08-11 21:01:56.779452] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:20:46.036 [2024-08-11 21:01:56.779572] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.294 [2024-08-11 21:01:56.921478] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:46.294 [2024-08-11 21:01:57.053734] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.294 [2024-08-11 21:01:57.053808] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.294 [2024-08-11 21:01:57.053822] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.294 [2024-08-11 21:01:57.053833] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.294 [2024-08-11 21:01:57.053842] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.294 [2024-08-11 21:01:57.053961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.294 [2024-08-11 21:01:57.054275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.558 [2024-08-11 21:01:57.131069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:47.126 21:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:47.126 21:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:20:47.126 21:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:20:47.126 21:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.126 21:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:47.126 21:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.126 21:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=93153 00:20:47.126 21:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:47.384 [2024-08-11 21:01:58.123361] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.384 21:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:47.643 Malloc0 00:20:47.902 21:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:48.161 21:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:48.419 21:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:48.678 [2024-08-11 21:01:59.217443] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:48.678 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:48.937 [2024-08-11 21:01:59.517498] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:48.937 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=93209 00:20:48.937 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:48.937 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.937 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 93209 /var/tmp/bdevperf.sock 00:20:48.937 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 93209 ']' 00:20:48.937 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.937 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:48.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.937 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.937 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:48.937 21:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:49.872 21:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:49.872 21:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:20:49.872 21:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:50.130 21:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:50.388 Nvme0n1 00:20:50.646 21:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:50.904 Nvme0n1 00:20:50.904 21:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:50.904 21:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:51.839 21:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:51.839 21:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:52.097 21:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:52.356 21:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:52.356 21:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=93260 00:20:52.356 21:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:52.356 21:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:58.920 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:58.921 Attaching 4 probes... 00:20:58.921 @path[10.0.0.3, 4421]: 15635 00:20:58.921 @path[10.0.0.3, 4421]: 16105 00:20:58.921 @path[10.0.0.3, 4421]: 16079 00:20:58.921 @path[10.0.0.3, 4421]: 16155 00:20:58.921 @path[10.0.0.3, 4421]: 16145 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 93260 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:58.921 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:59.180 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:59.180 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:59.180 21:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=93372 00:20:59.180 21:02:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:05.745 21:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:05.745 21:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:05.745 Attaching 4 probes... 00:21:05.745 @path[10.0.0.3, 4420]: 19551 00:21:05.745 @path[10.0.0.3, 4420]: 19768 00:21:05.745 @path[10.0.0.3, 4420]: 19936 00:21:05.745 @path[10.0.0.3, 4420]: 19515 00:21:05.745 @path[10.0.0.3, 4420]: 19880 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 93372 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:05.745 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:06.004 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:06.321 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:06.321 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:06.321 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=93486 00:21:06.321 21:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:12.884 21:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:12.884 21:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:12.884 Attaching 4 probes... 00:21:12.884 @path[10.0.0.3, 4421]: 16818 00:21:12.884 @path[10.0.0.3, 4421]: 20358 00:21:12.884 @path[10.0.0.3, 4421]: 20306 00:21:12.884 @path[10.0.0.3, 4421]: 20400 00:21:12.884 @path[10.0.0.3, 4421]: 20448 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 93486 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:12.884 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:13.142 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:13.142 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=93604 00:21:13.142 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:13.142 21:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:19.706 21:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:19.706 21:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:19.706 21:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:19.706 21:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:19.706 Attaching 4 probes... 
00:21:19.706 00:21:19.706 00:21:19.706 00:21:19.706 00:21:19.706 00:21:19.706 21:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:19.706 21:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:19.706 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:19.706 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:19.706 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:19.706 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:19.706 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 93604 00:21:19.706 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:19.706 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:19.706 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:19.706 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:19.965 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:19.965 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=93715 00:21:19.965 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:19.965 21:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:26.530 Attaching 4 probes... 
00:21:26.530 @path[10.0.0.3, 4421]: 18822 00:21:26.530 @path[10.0.0.3, 4421]: 19229 00:21:26.530 @path[10.0.0.3, 4421]: 19302 00:21:26.530 @path[10.0.0.3, 4421]: 19312 00:21:26.530 @path[10.0.0.3, 4421]: 19328 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 93715 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:26.530 21:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:26.530 21:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:27.466 21:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:27.466 21:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=93840 00:21:27.466 21:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:27.466 21:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:34.030 Attaching 4 probes... 
00:21:34.030 @path[10.0.0.3, 4420]: 17953 00:21:34.030 @path[10.0.0.3, 4420]: 18132 00:21:34.030 @path[10.0.0.3, 4420]: 18234 00:21:34.030 @path[10.0.0.3, 4420]: 18256 00:21:34.030 @path[10.0.0.3, 4420]: 18309 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 93840 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:34.030 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:34.359 [2024-08-11 21:02:44.851134] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:34.359 21:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:34.618 21:02:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:41.180 21:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:41.180 21:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94016 00:21:41.180 21:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 93153 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:41.180 21:02:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:46.451 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:46.451 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.710 Attaching 4 probes... 
00:21:46.710 @path[10.0.0.3, 4421]: 19007 00:21:46.710 @path[10.0.0.3, 4421]: 19463 00:21:46.710 @path[10.0.0.3, 4421]: 19508 00:21:46.710 @path[10.0.0.3, 4421]: 19373 00:21:46.710 @path[10.0.0.3, 4421]: 19437 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94016 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 93209 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 93209 ']' 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 93209 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:21:46.710 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:46.711 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93209 00:21:46.978 killing process with pid 93209 00:21:46.978 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:46.978 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:46.978 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93209' 00:21:46.978 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 93209 00:21:46.978 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 93209 00:21:46.978 Connection closed with partial response: 00:21:46.978 00:21:46.978 00:21:46.978 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 93209 00:21:46.978 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:46.978 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:21:46.978 [2024-08-11 21:01:59.592130] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:21:46.978 [2024-08-11 21:01:59.592259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93209 ] 00:21:46.978 [2024-08-11 21:01:59.728176] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.978 [2024-08-11 21:01:59.853651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.978 [2024-08-11 21:01:59.927359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:46.978 Running I/O for 90 seconds... 00:21:46.978 [2024-08-11 21:02:09.914118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.978 [2024-08-11 21:02:09.914190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.978 [2024-08-11 21:02:09.914242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.978 [2024-08-11 21:02:09.914261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:46.978 [2024-08-11 21:02:09.914282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.978 [2024-08-11 21:02:09.914297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.978 [2024-08-11 21:02:09.914316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.978 [2024-08-11 21:02:09.914329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:46.978 [2024-08-11 21:02:09.914349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.978 [2024-08-11 21:02:09.914363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.978 [2024-08-11 21:02:09.914381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.978 [2024-08-11 21:02:09.914395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:46.978 [2024-08-11 21:02:09.914414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.978 [2024-08-11 21:02:09.914428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.978 [2024-08-11 21:02:09.914446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.978 [2024-08-11 21:02:09.914460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 
sqhd:001b p:0 m:0 dnr:0 00:21:46.978 [2024-08-11 21:02:09.914479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.978 [2024-08-11 21:02:09.914493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.978 [2024-08-11 21:02:09.914513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.978 [2024-08-11 21:02:09.914526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.914572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.914588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.914624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.914639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.914658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.914671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.914691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.914705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.914724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.914739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.914758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.914773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915407] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.915856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.915891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.915924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.915957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.915978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.915993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.916013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.916028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.916047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.916062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.916082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.916095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.916122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:107 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.979 [2024-08-11 21:02:09.916139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.916159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.979 [2024-08-11 21:02:09.916173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:46.979 [2024-08-11 21:02:09.916192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.916707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.980 [2024-08-11 21:02:09.916741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.980 [2024-08-11 21:02:09.916776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.980 [2024-08-11 21:02:09.916810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:21:46.980 [2024-08-11 21:02:09.916829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.980 [2024-08-11 21:02:09.916843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.980 [2024-08-11 21:02:09.916877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.980 [2024-08-11 21:02:09.916910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.980 [2024-08-11 21:02:09.916944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.916963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.980 [2024-08-11 21:02:09.916985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917550] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.980 [2024-08-11 21:02:09.917620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.980 [2024-08-11 21:02:09.917637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.917657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.917672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.917691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.917706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.917725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.917739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.917759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.917773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.917793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.917807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.917826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.917840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.917860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.917879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.917917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:46.981 [2024-08-11 21:02:09.917933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.917953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.917967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.917987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.918000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.918020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.918034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.918054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.918067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.918098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.918113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.918132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.918146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.918166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.918180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.918199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.918213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.918232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.918247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.918267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:70 nsid:1 lba:26488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.918281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.981 [2024-08-11 21:02:09.919554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.919630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.919666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.919700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.919739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.919780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.919814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.919848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.919901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.919936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.919971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.919991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.920006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.920027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.920041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.920070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.920086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:09.920106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:09.920120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:16.557483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:16.557556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:16.557623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:16.557646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:16.557668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:16.557682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:16.557702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:16.557716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.981 
[2024-08-11 21:02:16.557736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:16.557749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:16.557769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:16.557783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.981 [2024-08-11 21:02:16.557802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.981 [2024-08-11 21:02:16.557816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.557836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.557849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.557869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.557883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.557902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.557916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.557936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.557975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.557997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558437] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.982 [2024-08-11 21:02:16.558761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 
21:02:16.558796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.558977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.558998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.559012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.559036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.982 [2024-08-11 21:02:16.559052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.982 [2024-08-11 21:02:16.559071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.559086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.559120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27176 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.559154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.559187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.559221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.559256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.559571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 
21:02:16.559852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.559867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.559901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.559936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.559971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.559991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.560005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.560038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.560072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.560106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.983 [2024-08-11 21:02:16.560149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.560185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.560219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.560253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.560287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.560321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.560356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.983 [2024-08-11 21:02:16.560391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:46.983 [2024-08-11 21:02:16.560411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560892] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.560960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.560980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.984 [2024-08-11 21:02:16.561070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.984 [2024-08-11 21:02:16.561106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.984 [2024-08-11 21:02:16.561140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.984 [2024-08-11 21:02:16.561174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.984 [2024-08-11 21:02:16.561208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.984 [2024-08-11 21:02:16.561242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:46.984 [2024-08-11 21:02:16.561277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.984 [2024-08-11 21:02:16.561311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:122 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.984 [2024-08-11 21:02:16.561841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.984 [2024-08-11 21:02:16.561856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:16.562509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:16.562535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:16.562565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:16.562592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:16.562634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:16.562652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:16.562678] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:16.562692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:16.562718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:16.562732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:16.562758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:16.562772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:16.562797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:16.562812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:16.562838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:16.562852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:16.562902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:16.562926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.662746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.662820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.662875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.662893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.662914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.662928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.662948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.662961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 
p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.662980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.662993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.985 [2024-08-11 21:02:23.663385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663679] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:46.985 [2024-08-11 21:02:23.663908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.985 [2024-08-11 21:02:23.663922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.663942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.663957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.663992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130528 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664378] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.664492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:129944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 
21:02:23.664729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.664972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.664986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.665006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.665019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.665039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.986 [2024-08-11 21:02:23.665052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.665073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.665088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.665110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.665125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.665145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.665159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.665186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.665201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.665221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.665235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.665254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.665268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.665288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.665301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:46.986 [2024-08-11 21:02:23.665321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.986 [2024-08-11 21:02:23.665334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.987 [2024-08-11 21:02:23.665367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.665977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.665992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.666026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.666069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.666116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666136] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.666150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.666184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.666218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.666252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.987 [2024-08-11 21:02:23.666290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.987 [2024-08-11 21:02:23.666326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.987 [2024-08-11 21:02:23.666360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.987 [2024-08-11 21:02:23.666395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.987 [2024-08-11 21:02:23.666431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.987 [2024-08-11 21:02:23.666464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 
21:02:23.666484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.987 [2024-08-11 21:02:23.666505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.987 [2024-08-11 21:02:23.666541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.666575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:46.987 [2024-08-11 21:02:23.666607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.987 [2024-08-11 21:02:23.666625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.666647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.666661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.666681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.666695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.666715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.666730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.666750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.666764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.666783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.666798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.666828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.666850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.666871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.666885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.666906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.666920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.666939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.666961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.666982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.666996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.667017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.667031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.667679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.667706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.667738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.667754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.667781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:23.667795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.667821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:23.667835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.667862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:23.667876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.667903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:23.667917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.667943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:23.667957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.667983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:23.667997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.668023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:23.668037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.668064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:23.668078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:23.668138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:23.668160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.217998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 
21:02:37.218194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.988 [2024-08-11 21:02:37.218442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:37.218473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:37.218500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:37.218528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:37.218556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:37.218584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.988 [2024-08-11 21:02:37.218632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.988 [2024-08-11 21:02:37.218647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.218660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.218687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.218714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.218740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.218767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.218793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.218831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.218858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.218886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.218913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.218941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.218970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.218986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.989 [2024-08-11 21:02:37.219380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:46.989 [2024-08-11 21:02:37.219410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 
21:02:37.219724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.989 [2024-08-11 21:02:37.219853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.989 [2024-08-11 21:02:37.219866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.219881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.219893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.219908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.219921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.219936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.219956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.219972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.219985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220316] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.990 [2024-08-11 21:02:37.220833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110944 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.220971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.220985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.221000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.221014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.221029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.990 [2024-08-11 21:02:37.221043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.990 [2024-08-11 21:02:37.221058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 
[2024-08-11 21:02:37.221219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.991 [2024-08-11 21:02:37.221428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221519] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.991 [2024-08-11 21:02:37.221885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.221942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.991 [2024-08-11 21:02:37.221963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.991 [2024-08-11 21:02:37.221975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110536 len:8 PRP1 0x0 PRP2 0x0 00:21:46.991 [2024-08-11 21:02:37.221989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.991 [2024-08-11 21:02:37.222046] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d39100 was disconnected and freed. reset controller. 00:21:46.991 [2024-08-11 21:02:37.223096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.991 [2024-08-11 21:02:37.223179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3b830 (9): Bad file descriptor 00:21:46.991 [2024-08-11 21:02:37.223518] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.991 [2024-08-11 21:02:37.223548] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3b830 with addr=10.0.0.3, port=4421 00:21:46.991 [2024-08-11 21:02:37.223565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b830 is same with the state(6) to be set 00:21:46.991 [2024-08-11 21:02:37.223629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3b830 (9): Bad file descriptor 00:21:46.991 [2024-08-11 21:02:37.223666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.991 [2024-08-11 21:02:37.223683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.991 [2024-08-11 21:02:37.223697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.991 [2024-08-11 21:02:37.223749] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.991 [2024-08-11 21:02:37.223768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.991 [2024-08-11 21:02:47.282098] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
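The burst of ASYMMETRIC ACCESS INACCESSIBLE (03/02) and ABORTED - SQ DELETION (00/08) completions above is the multipath failover being exercised: queued I/O is aborted when the submission queue is deleted, the controller is reset, the first reconnect to 10.0.0.3 port 4421 is refused (errno 111, connection refused), and the retry at 21:02:47 succeeds. A minimal sketch for tallying these completion statuses from a saved console log (the file name build.log is an assumption, not something the test produces) is:

  # count how many completions carried each status string
  grep -oE 'ASYMMETRIC ACCESS INACCESSIBLE \(03/02\)|ABORTED - SQ DELETION \(00/08\)' build.log | sort | uniq -c

Each match is one spdk_nvme_print_completion notice, so the two counts give a rough measure of how much queued I/O was failed over during the controller reset.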
00:21:46.991 Received shutdown signal, test time was about 55.848173 seconds 00:21:46.991 00:21:46.991 Latency(us) 00:21:46.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.991 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:46.992 Verification LBA range: start 0x0 length 0x4000 00:21:46.992 Nvme0n1 : 55.85 8056.11 31.47 0.00 0.00 15858.90 927.19 7015926.69 00:21:46.992 =================================================================================================================== 00:21:46.992 Total : 8056.11 31.47 0.00 0.00 15858.90 927.19 7015926.69 00:21:46.992 21:02:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.251 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:47.251 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # nvmfcleanup 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.510 rmmod nvme_tcp 00:21:47.510 rmmod nvme_fabrics 00:21:47.510 rmmod nvme_keyring 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # '[' -n 93153 ']' 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # killprocess 93153 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 93153 ']' 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 93153 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93153 00:21:47.510 killing process with pid 93153 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93153' 00:21:47.510 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 93153 00:21:47.510 21:02:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 93153 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@293 -- # iptr 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@783 -- # iptables-save 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@783 -- # iptables-restore 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:21:47.769 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # remove_spdk_ns 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@296 -- # return 0 00:21:48.028 00:21:48.028 real 1m2.473s 00:21:48.028 user 2m52.446s 00:21:48.028 sys 0m19.485s 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:48.028 ************************************ 00:21:48.028 END TEST nvmf_host_multipath 
00:21:48.028 ************************************ 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.028 ************************************ 00:21:48.028 START TEST nvmf_timeout 00:21:48.028 ************************************ 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:48.028 * Looking for test storage... 00:21:48.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.028 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.287 21:02:58 
nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # prepare_net_devs 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@430 -- # local -g is_hw=no 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # remove_spdk_ns 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@452 -- # nvmf_veth_init 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:48.287 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:48.288 21:02:58 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:21:48.288 Cannot find device "nvmf_init_br" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:21:48.288 Cannot find device "nvmf_init_br2" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:21:48.288 Cannot find device "nvmf_tgt_br" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:21:48.288 Cannot find device "nvmf_tgt_br2" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:21:48.288 Cannot find device "nvmf_init_br" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:21:48.288 Cannot find device "nvmf_init_br2" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:21:48.288 Cannot find device "nvmf_tgt_br" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:21:48.288 Cannot find device "nvmf_tgt_br2" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:21:48.288 Cannot find device "nvmf_br" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:21:48.288 Cannot find device "nvmf_init_if" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:21:48.288 Cannot find device "nvmf_init_if2" 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:48.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:48.288 21:02:58 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:48.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:48.288 21:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:48.288 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:48.288 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:48.288 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:48.288 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:48.288 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:48.288 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:21:48.288 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- 
nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:21:48.547 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:48.547 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:21:48.547 00:21:48.547 --- 10.0.0.3 ping statistics --- 00:21:48.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.547 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:21:48.547 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:48.547 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:21:48.547 00:21:48.547 --- 10.0.0.4 ping statistics --- 00:21:48.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.547 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:48.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:21:48.547 00:21:48.547 --- 10.0.0.1 ping statistics --- 00:21:48.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.547 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:48.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:48.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:21:48.547 00:21:48.547 --- 10.0.0.2 ping statistics --- 00:21:48.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.547 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@453 -- # return 0 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # '[' '' == iso ']' 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@501 -- # nvmfpid=94371 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:48.547 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # waitforlisten 94371 00:21:48.548 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 94371 ']' 00:21:48.548 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.548 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:48.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.548 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.548 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:48.548 21:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:48.548 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:21:48.548 [2024-08-11 21:02:59.306192] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:21:48.548 [2024-08-11 21:02:59.306312] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.806 [2024-08-11 21:02:59.448168] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:48.806 [2024-08-11 21:02:59.541931] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.806 [2024-08-11 21:02:59.542000] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.806 [2024-08-11 21:02:59.542014] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.806 [2024-08-11 21:02:59.542024] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.806 [2024-08-11 21:02:59.542033] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.806 [2024-08-11 21:02:59.542268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.806 [2024-08-11 21:02:59.542685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.064 [2024-08-11 21:02:59.601143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:49.631 21:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:49.631 21:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:21:49.631 21:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:21:49.631 21:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.631 21:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:49.631 21:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.631 21:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:49.631 21:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:49.890 [2024-08-11 21:03:00.550214] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.890 21:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:50.148 Malloc0 00:21:50.148 21:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.407 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.974 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:50.974 [2024-08-11 21:03:01.676580] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:50.974 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # 
bdevperf_pid=94426 00:21:50.974 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 94426 /var/tmp/bdevperf.sock 00:21:50.974 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 94426 ']' 00:21:50.974 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:50.974 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.974 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:50.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.974 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.974 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:50.974 21:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:50.974 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:21:50.974 [2024-08-11 21:03:01.747389] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:21:50.974 [2024-08-11 21:03:01.747510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94426 ] 00:21:51.233 [2024-08-11 21:03:01.886014] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.233 [2024-08-11 21:03:01.991751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.492 [2024-08-11 21:03:02.050162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:52.060 21:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:52.060 21:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:21:52.060 21:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:52.319 21:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:52.578 NVMe0n1 00:21:52.578 21:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=94444 00:21:52.578 21:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:52.578 21:03:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:52.836 Running I/O for 10 seconds... 
00:21:53.771 21:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:54.032 [2024-08-11 21:03:04.596789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.597651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.597811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.597901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.597972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.598066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.598196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.598428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.598528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.598636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.598723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.598816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.598884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.599144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.599244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.599319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.599389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.599465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.599549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 
[2024-08-11 21:03:04.599830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.599936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.600016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.600082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.032 [2024-08-11 21:03:04.600158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.032 [2024-08-11 21:03:04.600236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.600452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.600550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.600650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.600721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.600825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.600895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.601006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.601067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.601135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.601200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.601270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.601331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.601415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.601478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.601571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.601674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.601763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.601842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.033 [2024-08-11 21:03:04.601930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.601996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.033 [2024-08-11 21:03:04.602106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.033 [2024-08-11 21:03:04.602307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.033 [2024-08-11 21:03:04.602434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.033 [2024-08-11 21:03:04.602456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.033 [2024-08-11 21:03:04.602478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.033 [2024-08-11 21:03:04.602499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 
21:03:04.602976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.602985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.602995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.603004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.603014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.033 [2024-08-11 21:03:04.603023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.033 [2024-08-11 21:03:04.603033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603176] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:103 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61128 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.034 [2024-08-11 21:03:04.603730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.034 [2024-08-11 21:03:04.603767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.034 [2024-08-11 21:03:04.603787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.034 [2024-08-11 
21:03:04.603807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.034 [2024-08-11 21:03:04.603826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.034 [2024-08-11 21:03:04.603851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.034 [2024-08-11 21:03:04.603871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.034 [2024-08-11 21:03:04.603882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.603891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.603901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:54.035 [2024-08-11 21:03:04.603910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.603920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.603929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.603939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.603948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.603967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.603975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.603987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.603996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604228] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:54.035 [2024-08-11 21:03:04.604542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e09b0 is same with the state(6) to be set 00:21:54.035 [2024-08-11 21:03:04.604566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:54.035 [2024-08-11 21:03:04.604573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:54.035 [2024-08-11 21:03:04.604581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60536 len:8 PRP1 0x0 PRP2 0x0 00:21:54.035 [2024-08-11 21:03:04.604590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604655] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23e09b0 was disconnected and freed. reset controller. 
00:21:54.035 [2024-08-11 21:03:04.604764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.035 [2024-08-11 21:03:04.604780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.035 [2024-08-11 21:03:04.604800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.035 [2024-08-11 21:03:04.604818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:54.035 [2024-08-11 21:03:04.604836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:54.035 [2024-08-11 21:03:04.604845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e4e20 is same with the state(6) to be set 00:21:54.035 [2024-08-11 21:03:04.605064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:54.035 [2024-08-11 21:03:04.605089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e4e20 (9): Bad file descriptor 00:21:54.035 [2024-08-11 21:03:04.605194] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.036 [2024-08-11 21:03:04.605214] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e4e20 with addr=10.0.0.3, port=4420 00:21:54.036 [2024-08-11 21:03:04.605224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e4e20 is same with the state(6) to be set 00:21:54.036 [2024-08-11 21:03:04.605243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e4e20 (9): Bad file descriptor 00:21:54.036 [2024-08-11 21:03:04.605257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:54.036 [2024-08-11 21:03:04.605266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:54.036 [2024-08-11 21:03:04.605276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:54.036 [2024-08-11 21:03:04.605295] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:54.036 [2024-08-11 21:03:04.605306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:54.036 21:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:55.971 [2024-08-11 21:03:06.605609] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.971 [2024-08-11 21:03:06.605699] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e4e20 with addr=10.0.0.3, port=4420 00:21:55.971 [2024-08-11 21:03:06.605717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e4e20 is same with the state(6) to be set 00:21:55.971 [2024-08-11 21:03:06.605744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e4e20 (9): Bad file descriptor 00:21:55.971 [2024-08-11 21:03:06.605763] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:55.971 [2024-08-11 21:03:06.605773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:55.971 [2024-08-11 21:03:06.605785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:55.971 [2024-08-11 21:03:06.605813] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:55.971 [2024-08-11 21:03:06.605825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:55.971 21:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:55.971 21:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:55.971 21:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:56.229 21:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:56.229 21:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:56.229 21:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:56.229 21:03:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:56.488 21:03:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:56.488 21:03:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:57.864 [2024-08-11 21:03:08.606263] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.864 [2024-08-11 21:03:08.606331] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e4e20 with addr=10.0.0.3, port=4420 00:21:57.864 [2024-08-11 21:03:08.606364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e4e20 is same with the state(6) to be set 00:21:57.864 [2024-08-11 21:03:08.606499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e4e20 (9): Bad file descriptor 00:21:57.864 [2024-08-11 21:03:08.606546] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:57.864 [2024-08-11 21:03:08.606560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:57.864 [2024-08-11 
21:03:08.606589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:57.864 [2024-08-11 21:03:08.606708] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:57.864 [2024-08-11 21:03:08.606727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.396 [2024-08-11 21:03:10.606788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.396 [2024-08-11 21:03:10.606868] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:00.396 [2024-08-11 21:03:10.606897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:00.396 [2024-08-11 21:03:10.606910] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:00.396 [2024-08-11 21:03:10.606941] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:00.963 00:22:00.963 Latency(us) 00:22:00.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.963 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:00.963 Verification LBA range: start 0x0 length 0x4000 00:22:00.963 NVMe0n1 : 8.13 925.60 3.62 15.75 0.00 135776.99 3232.12 7046430.72 00:22:00.963 =================================================================================================================== 00:22:00.963 Total : 925.60 3.62 15.75 0.00 135776.99 3232.12 7046430.72 00:22:00.963 0 00:22:01.530 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:01.530 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.530 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:01.788 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:01.789 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:01.789 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:01.789 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:02.047 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:02.047 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 94444 00:22:02.047 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 94426 00:22:02.047 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 94426 ']' 00:22:02.047 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 94426 00:22:02.047 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:22:02.047 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:02.047 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94426 00:22:02.306 killing process with pid 94426 00:22:02.306 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:02.306 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:02.306 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94426' 00:22:02.306 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 94426 00:22:02.306 Received shutdown signal, test time was about 9.366009 seconds 00:22:02.306 00:22:02.306 Latency(us) 00:22:02.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.306 =================================================================================================================== 00:22:02.306 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.306 21:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 94426 00:22:02.306 21:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:02.566 [2024-08-11 21:03:13.317729] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:02.566 21:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:02.566 21:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=94570 00:22:02.566 21:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 94570 /var/tmp/bdevperf.sock 00:22:02.566 21:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 94570 ']' 00:22:02.566 21:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.566 21:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:02.566 21:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.825 21:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:02.825 21:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:02.825 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:22:02.825 [2024-08-11 21:03:13.394776] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:22:02.825 [2024-08-11 21:03:13.394895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94570 ] 00:22:02.825 [2024-08-11 21:03:13.528539] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.083 [2024-08-11 21:03:13.626550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.083 [2024-08-11 21:03:13.680403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:03.650 21:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:03.650 21:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:03.650 21:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:03.909 21:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:04.475 NVMe0n1 00:22:04.475 21:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=94600 00:22:04.475 21:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:04.475 21:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:04.475 Running I/O for 10 seconds... 
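The attach sequence traced above is what drives the reconnect behaviour in the remainder of this run. A minimal sketch of the same RPC sequence, using only the commands, flags, socket path, address (10.0.0.3:4420) and NQN (nqn.2016-06.io.spdk:cnode1) that appear in this log, and assuming the target subsystem and the bdevperf instance on /var/tmp/bdevperf.sock are already up as shown earlier, would be:

  # bdev_nvme options as set by host/timeout.sh@78 (-r -1, i.e. retry count of -1)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  # attach the controller with a 5 s ctrlr-loss timeout, 2 s fast-io-fail and 1 s reconnect delay
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # removing the target listener (as the test does next) forces the reconnect/timeout path
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The listener removal that follows is what produces the aborted-I/O (SQ deletion) notices and controller-reset retries recorded below.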
00:22:05.412 21:03:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:05.673 [2024-08-11 21:03:16.274842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.673 [2024-08-11 21:03:16.274936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.274966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.673 [2024-08-11 21:03:16.274977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.274990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.673 [2024-08-11 21:03:16.274999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.673 [2024-08-11 21:03:16.275020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.673 [2024-08-11 21:03:16.275040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.673 [2024-08-11 21:03:16.275060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.673 [2024-08-11 21:03:16.275080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.673 [2024-08-11 21:03:16.275099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.673 [2024-08-11 21:03:16.275119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:05.673 [2024-08-11 21:03:16.275140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.673 [2024-08-11 21:03:16.275160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.673 [2024-08-11 21:03:16.275180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.673 [2024-08-11 21:03:16.275200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.673 [2024-08-11 21:03:16.275223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.673 [2024-08-11 21:03:16.275233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275558] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.674 [2024-08-11 21:03:16.275866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.275986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.275995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.276006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.276016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.276029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.276038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 
[2024-08-11 21:03:16.276050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.276060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.276071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.276080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.276091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.276100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.276111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.276120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.674 [2024-08-11 21:03:16.276131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.674 [2024-08-11 21:03:16.276145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.675 [2024-08-11 21:03:16.276165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276466] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.675 [2024-08-11 21:03:16.276699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.675 [2024-08-11 21:03:16.276731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.675 [2024-08-11 21:03:16.276751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.675 [2024-08-11 21:03:16.276785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.675 [2024-08-11 21:03:16.276805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.675 [2024-08-11 21:03:16.276825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.675 [2024-08-11 21:03:16.276845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.675 [2024-08-11 21:03:16.276865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:05.675 [2024-08-11 21:03:16.276926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.276985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.276996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.675 [2024-08-11 21:03:16.277015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.675 [2024-08-11 21:03:16.277025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 
21:03:16.277148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.676 [2024-08-11 21:03:16.277228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.676 [2024-08-11 21:03:16.277248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.676 [2024-08-11 21:03:16.277267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.676 [2024-08-11 21:03:16.277288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.676 [2024-08-11 21:03:16.277307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.676 [2024-08-11 21:03:16.277327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.676 [2024-08-11 21:03:16.277346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:05.676 [2024-08-11 21:03:16.277366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.676 [2024-08-11 21:03:16.277691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16639a0 is same with the state(6) to be set 00:22:05.676 [2024-08-11 21:03:16.277715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:05.676 [2024-08-11 21:03:16.277722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:05.676 [2024-08-11 21:03:16.277731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82168 len:8 PRP1 0x0 PRP2 0x0 00:22:05.676 [2024-08-11 21:03:16.277745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:05.676 [2024-08-11 21:03:16.277815] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16639a0 was disconnected and freed. reset controller. 
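The long run of paired notices above is nvme_io_qpair_print_command followed by spdk_nvme_print_completion for every I/O that was still queued on qpair 1 when its submission queue was deleted: each command is completed with generic status (00/08), i.e. Command Aborted due to SQ Deletion, after which the qpair is freed and bdev_nvme schedules a controller reset. Purely as an illustration (not part of the test suite), a short Python sketch that pulls the (SCT/SC) pair out of such a notice and names it; the decode_status helper below is hypothetical:

    import re

    # Names for NVMe generic command status codes (status code type 0x0) seen in
    # these notices; 0x08 is "Command Aborted due to SQ Deletion" in the NVMe spec.
    GENERIC_STATUS = {0x08: "ABORTED - SQ DELETION"}

    # The "(00/08)" token printed by spdk_nvme_print_completion is the SCT/SC pair.
    COMPLETION_RE = re.compile(
        r"spdk_nvme_print_completion: .*\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)")

    def decode_status(line):
        """Return a human-readable name for the completion status in a log line."""
        m = COMPLETION_RE.search(line)
        if not m:
            return "not a completion notice"
        sct = int(m.group("sct"), 16)
        sc = int(m.group("sc"), 16)
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
        return "sct=0x%x sc=0x%02x" % (sct, sc)

    print(decode_status("nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: "
                        "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"))
    # -> ABORTED - SQ DELETION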
00:22:05.676 [2024-08-11 21:03:16.278102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:05.676 [2024-08-11 21:03:16.278216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1667e20 (9): Bad file descriptor
00:22:05.676 [2024-08-11 21:03:16.278346] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:05.676 [2024-08-11 21:03:16.278368] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1667e20 with addr=10.0.0.3, port=4420
00:22:05.676 [2024-08-11 21:03:16.278378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1667e20 is same with the state(6) to be set
00:22:05.676 [2024-08-11 21:03:16.278397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1667e20 (9): Bad file descriptor
00:22:05.676 [2024-08-11 21:03:16.278414] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:05.676 [2024-08-11 21:03:16.278423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:05.676 [2024-08-11 21:03:16.278435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:05.676 [2024-08-11 21:03:16.278455] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:05.676 [2024-08-11 21:03:16.278467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:05.676 21:03:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:22:06.613 [2024-08-11 21:03:17.278687] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.613 [2024-08-11 21:03:17.278779] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1667e20 with addr=10.0.0.3, port=4420
00:22:06.613 [2024-08-11 21:03:17.278798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1667e20 is same with the state(6) to be set
00:22:06.613 [2024-08-11 21:03:17.278828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1667e20 (9): Bad file descriptor
00:22:06.613 [2024-08-11 21:03:17.278848] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:06.613 [2024-08-11 21:03:17.278858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:06.613 [2024-08-11 21:03:17.278871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:06.613 [2024-08-11 21:03:17.278901] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
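Both reconnect attempts above fail the same way: with the target's listener removed, nothing accepts TCP connections on 10.0.0.3:4420, so connect() returns errno = 111, which on Linux is ECONNREFUSED, and the controller stays in the failed state until host/timeout.sh re-adds the listener in the entries that follow. A minimal sketch of that failure mode, assuming a local port with no listener instead of the test's address:

    import errno
    import socket

    # Connecting to an address where nothing is listening fails with ECONNREFUSED,
    # the "connect() failed, errno = 111" reported by uring_sock_create above.
    def try_connect(addr, port):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1.0)
            try:
                sock.connect((addr, port))
                print("connected to %s:%d" % (addr, port))
            except OSError as exc:
                refused = exc.errno == errno.ECONNREFUSED  # value 111 on Linux
                print("connect() failed, errno = %s (ECONNREFUSED=%s)" % (exc.errno, refused))

    # Hypothetical local stand-in for the test's 10.0.0.3:4420 while the listener is removed.
    try_connect("127.0.0.1", 4420)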
00:22:06.613 [2024-08-11 21:03:17.278914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:06.613 21:03:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:06.871 [2024-08-11 21:03:17.573356] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:06.871 21:03:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 94600
00:22:07.808 [2024-08-11 21:03:18.292102] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:14.374
00:22:14.374                                                                                    Latency(us)
00:22:14.374 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:22:14.374 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:14.374      Verification LBA range: start 0x0 length 0x4000
00:22:14.374      NVMe0n1                :      10.01    6863.75      26.81      0.00     0.00   18611.10    1228.80 3019898.88
00:22:14.374 ===================================================================================================================
00:22:14.374 Total                       :               6863.75      26.81      0.00     0.00   18611.10    1228.80 3019898.88
00:22:14.374 0
00:22:14.374 21:03:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=94706
00:22:14.374 21:03:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:14.632 21:03:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:14.632 Running I/O for 10 seconds...
00:22:15.567 21:03:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:15.828 [2024-08-11 21:03:26.410696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.828 [2024-08-11 21:03:26.410783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.828 [2024-08-11 21:03:26.410831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:15.828 [2024-08-11 21:03:26.410842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.828 [2024-08-11 21:03:26.410854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:15.828 [2024-08-11 21:03:26.410864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.828 [2024-08-11 21:03:26.410875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:15.828 [2024-08-11 21:03:26.410885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.828 [2024-08-11 21:03:26.410897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:15.828 [2024-08-11 21:03:26.410906] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.410918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.410926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.410937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.410946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.410957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.410965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.410976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.410985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.410995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.828 [2024-08-11 21:03:26.411361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.828 [2024-08-11 21:03:26.411372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 
21:03:26.411533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411744] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:53 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.411987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.411996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.412016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.412035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.412055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.412075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.412095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.412115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.412136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66336 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.412157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.412178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.829 [2024-08-11 21:03:26.412198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.829 [2024-08-11 21:03:26.412209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 
21:03:26.412359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.412981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.412990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.413001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.413009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.413020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.413029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.413040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.830 [2024-08-11 21:03:26.413049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.830 [2024-08-11 21:03:26.413060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.831 [2024-08-11 21:03:26.413068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.831 [2024-08-11 21:03:26.413088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 
21:03:26.413210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.831 [2024-08-11 21:03:26.413400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.831 [2024-08-11 21:03:26.413420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ba30 is same with the state(6) to be set 00:22:15.831 [2024-08-11 21:03:26.413442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:15.831 [2024-08-11 21:03:26.413457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:15.831 [2024-08-11 21:03:26.413465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:22:15.831 [2024-08-11 21:03:26.413474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413542] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x169ba30 was disconnected and freed. reset controller. 00:22:15.831 [2024-08-11 21:03:26.413640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.831 [2024-08-11 21:03:26.413656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.831 [2024-08-11 21:03:26.413677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.831 [2024-08-11 21:03:26.413697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.831 [2024-08-11 21:03:26.413716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.831 [2024-08-11 21:03:26.413725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1667e20 is same with the state(6) to be set 00:22:15.831 [2024-08-11 21:03:26.413934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:15.831 [2024-08-11 21:03:26.413959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1667e20 (9): Bad file descriptor 00:22:15.831 [2024-08-11 21:03:26.414063] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.831 [2024-08-11 21:03:26.414111] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1667e20 with addr=10.0.0.3, port=4420 00:22:15.831 [2024-08-11 21:03:26.414123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1667e20 is same with the state(6) to be set 00:22:15.831 [2024-08-11 
21:03:26.414142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1667e20 (9): Bad file descriptor 00:22:15.831 [2024-08-11 21:03:26.414159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:15.831 [2024-08-11 21:03:26.414169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:15.831 [2024-08-11 21:03:26.414181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:15.831 [2024-08-11 21:03:26.414203] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.831 [2024-08-11 21:03:26.414215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:15.831 21:03:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:16.766 [2024-08-11 21:03:27.414409] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:16.766 [2024-08-11 21:03:27.414499] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1667e20 with addr=10.0.0.3, port=4420 00:22:16.766 [2024-08-11 21:03:27.414518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1667e20 is same with the state(6) to be set 00:22:16.766 [2024-08-11 21:03:27.414548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1667e20 (9): Bad file descriptor 00:22:16.766 [2024-08-11 21:03:27.414569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:16.766 [2024-08-11 21:03:27.414580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:16.766 [2024-08-11 21:03:27.414639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:16.766 [2024-08-11 21:03:27.414674] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:16.766 [2024-08-11 21:03:27.414687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:17.744 [2024-08-11 21:03:28.414877] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:17.744 [2024-08-11 21:03:28.414984] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1667e20 with addr=10.0.0.3, port=4420 00:22:17.744 [2024-08-11 21:03:28.415002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1667e20 is same with the state(6) to be set 00:22:17.744 [2024-08-11 21:03:28.415033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1667e20 (9): Bad file descriptor 00:22:17.744 [2024-08-11 21:03:28.415071] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:17.744 [2024-08-11 21:03:28.415084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:17.744 [2024-08-11 21:03:28.415096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:17.744 [2024-08-11 21:03:28.415129] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:17.744 [2024-08-11 21:03:28.415142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:18.680 [2024-08-11 21:03:29.418766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:18.680 [2024-08-11 21:03:29.418876] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1667e20 with addr=10.0.0.3, port=4420 00:22:18.680 [2024-08-11 21:03:29.418894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1667e20 is same with the state(6) to be set 00:22:18.680 [2024-08-11 21:03:29.419144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1667e20 (9): Bad file descriptor 00:22:18.680 [2024-08-11 21:03:29.419382] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:18.680 [2024-08-11 21:03:29.419396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:18.680 [2024-08-11 21:03:29.419408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:18.680 [2024-08-11 21:03:29.423317] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:18.680 [2024-08-11 21:03:29.423360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:18.680 21:03:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:18.940 [2024-08-11 21:03:29.704229] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:19.198 21:03:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 94706 00:22:19.765 [2024-08-11 21:03:30.465233] bdev_nvme.c:2058:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
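The entries above are the host side of the listener-toggle scenario: while no listener is up on 10.0.0.3:4420, every reconnect attempt fails in uring_sock_create with connect() errno 111 (ECONNREFUSED), bdev_nvme logs "Resetting controller failed." and schedules another attempt, and the reset only completes once host/timeout.sh re-adds the listener. A minimal sketch of that toggle on the target side, reusing only the RPCs that appear verbatim in this trace; the variable names and the sleep length are illustrative, not the exact script:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # drop the listener: host-side connect() now fails with errno 111 until it returns
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.3 -s 4420
    sleep 3

    # restore the listener: the next bdev_nvme reconnect attempt succeeds
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4420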
00:22:25.033 00:22:25.033 Latency(us) 00:22:25.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.033 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:25.033 Verification LBA range: start 0x0 length 0x4000 00:22:25.033 NVMe0n1 : 10.01 5806.79 22.68 3771.42 0.00 13336.79 640.47 3019898.88 00:22:25.033 =================================================================================================================== 00:22:25.033 Total : 5806.79 22.68 3771.42 0.00 13336.79 0.00 3019898.88 00:22:25.033 0 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 94570 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 94570 ']' 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 94570 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94570 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:25.033 killing process with pid 94570 00:22:25.033 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.033 00:22:25.033 Latency(us) 00:22:25.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.033 =================================================================================================================== 00:22:25.033 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94570' 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 94570 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 94570 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=94815 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 94815 /var/tmp/bdevperf.sock 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 94815 ']' 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
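After the summary, the old bdevperf process (pid 94570) is killed and a fresh instance is launched with -z, which keeps bdevperf idle until it is configured over its JSON-RPC socket; waitforlisten then blocks until /var/tmp/bdevperf.sock accepts connections. A rough sketch of that start-and-wait pattern, using the binary, flags, and socket path shown in the trace; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual implementation:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    SOCK=/var/tmp/bdevperf.sock

    # -z: start idle and wait for RPC configuration; -r: path of the RPC socket
    $BDEVPERF -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!

    # wait until the RPC socket answers (stand-in for the real waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done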
00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:25.033 21:03:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:25.033 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:22:25.033 [2024-08-11 21:03:35.702733] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:22:25.033 [2024-08-11 21:03:35.702869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94815 ] 00:22:25.296 [2024-08-11 21:03:35.843009] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.296 [2024-08-11 21:03:35.970283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.296 [2024-08-11 21:03:36.044061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:26.233 21:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:26.233 21:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:26.233 21:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=94831 00:22:26.233 21:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:26.233 21:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94815 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:26.233 21:03:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:26.492 NVMe0n1 00:22:26.492 21:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=94867 00:22:26.492 21:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:26.492 21:03:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:26.751 Running I/O for 10 seconds... 
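With the socket up, host/timeout.sh configures the NVMe bdev layer and attaches the controller with an explicit reconnect policy (per the option names, retry the connection every two seconds and give the controller up after five seconds of loss), which is the behaviour this timeout test exercises. A condensed sketch of that configuration sequence, taken from the trace above with only line breaks and comments added; the bpftrace attach and the dtrace_pid/rpc_pid bookkeeping are omitted:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # global NVMe bdev options used by the test (flags exactly as in the trace)
    $RPC bdev_nvme_set_options -r -1 -e 9

    # attach the target; reconnect every 2 s, declare the controller lost after 5 s
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # start the queued random-read workload against the new NVMe0n1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests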
00:22:27.689 21:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:27.952 [2024-08-11 21:03:38.527755] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527824] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527834] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527842] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527856] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527864] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527872] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527880] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527888] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527895] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527903] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527910] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527919] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527926] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527933] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527941] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527948] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527957] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527964] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527972] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527979] 
tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527986] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.527994] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528001] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528008] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528015] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528022] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528029] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528036] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528043] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528050] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528058] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528069] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528077] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528084] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528092] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528100] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528108] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528115] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528123] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528131] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528138] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the 
state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528145] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528152] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528160] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528174] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528181] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528188] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528195] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528203] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528210] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528217] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528225] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528232] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528241] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528257] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528265] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528273] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528280] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528289] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528297] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528321] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.952 [2024-08-11 21:03:38.528329] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528337] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528345] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528353] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528361] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528369] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528380] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528387] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528395] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528403] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528411] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528418] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528425] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528433] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528440] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528447] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528465] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528472] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528479] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528487] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528495] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528502] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528509] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 
21:03:38.528517] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528524] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528531] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528538] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528546] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528553] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528561] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528568] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528577] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528585] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528592] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528600] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528618] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528627] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528634] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528642] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528650] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528658] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528666] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528674] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528682] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528690] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same 
with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528697] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528704] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528712] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528719] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528726] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528734] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528741] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528748] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528755] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528762] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528769] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528776] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528783] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528790] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528798] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528808] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528816] tcp.c:1706:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1081470 is same with the state(6) to be set 00:22:27.953 [2024-08-11 21:03:38.528905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.528951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 21:03:38.528978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.528990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 
21:03:38.529002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.529011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 21:03:38.529023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.529032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 21:03:38.529044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.529054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 21:03:38.529065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.529074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 21:03:38.529085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.529095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 21:03:38.529107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.529116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 21:03:38.529127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.529136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 21:03:38.529148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.529157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 21:03:38.529167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.529176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.953 [2024-08-11 21:03:38.529188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.953 [2024-08-11 21:03:38.529197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529420] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.529984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.529996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.530005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.530016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.530026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.530038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.530047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.530058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 21:03:38.530076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.954 [2024-08-11 21:03:38.530088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.954 [2024-08-11 
21:03:38.530098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.955 [2024-08-11 21:03:38.530962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.955 [2024-08-11 21:03:38.530971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:27.955 [2024-08-11 21:03:38.530982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531195] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.956 [2024-08-11 21:03:38.531711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.956 [2024-08-11 21:03:38.531723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7ba50 is same with the state(6) to be set 00:22:27.957 [2024-08-11 21:03:38.531736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:27.957 [2024-08-11 21:03:38.531744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:27.957 [2024-08-11 21:03:38.531753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24096 len:8 PRP1 0x0 PRP2 0x0 00:22:27.957 [2024-08-11 21:03:38.531762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.957 [2024-08-11 21:03:38.531838] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c7ba50 was disconnected and freed. reset controller. 
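Note: the long run of ABORTED - SQ DELETION completions above is the host-side qpair teardown visible in this run: once the TCP connection to the target drops, every READ still queued on submission queue 1 is completed manually with that status (nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request) before qpair 0x1c7ba50 is freed and a controller reset is scheduled. A quick way to summarize such a flood from a saved copy of this console output is a small grep/uniq pipeline; "autotest.log" below is only a placeholder for a local capture of this output, not a file produced by the test itself.

  # Count aborted commands per submission queue from a saved copy of this log.
  # The pattern matches the spdk_nvme_print_completion notices shown above.
  LOGFILE=autotest.log
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' "$LOGFILE" \
      | sort | uniq -c | sort -rn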
00:22:27.957 [2024-08-11 21:03:38.532178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:27.957 [2024-08-11 21:03:38.532472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7ffe0 (9): Bad file descriptor 00:22:27.957 [2024-08-11 21:03:38.532631] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.957 [2024-08-11 21:03:38.532655] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7ffe0 with addr=10.0.0.3, port=4420 00:22:27.957 [2024-08-11 21:03:38.532666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7ffe0 is same with the state(6) to be set 00:22:27.957 [2024-08-11 21:03:38.532686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7ffe0 (9): Bad file descriptor 00:22:27.957 [2024-08-11 21:03:38.532702] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.957 [2024-08-11 21:03:38.532712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:27.957 [2024-08-11 21:03:38.532723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:27.957 [2024-08-11 21:03:38.532745] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:27.957 [2024-08-11 21:03:38.532756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:27.957 21:03:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 94867 00:22:29.863 [2024-08-11 21:03:40.533021] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:29.863 [2024-08-11 21:03:40.533460] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7ffe0 with addr=10.0.0.3, port=4420 00:22:29.863 [2024-08-11 21:03:40.533489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7ffe0 is same with the state(6) to be set 00:22:29.863 [2024-08-11 21:03:40.533543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7ffe0 (9): Bad file descriptor 00:22:29.863 [2024-08-11 21:03:40.533566] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:29.863 [2024-08-11 21:03:40.533577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:29.863 [2024-08-11 21:03:40.533589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:29.863 [2024-08-11 21:03:40.533643] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
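Note: each reconnect attempt above fails in uring_sock_create() with errno = 111 (ECONNREFUSED) because nothing is listening on 10.0.0.3:4420 anymore; spdk_nvme_ctrlr_reconnect_poll_async() then marks the controller failed and bdev_nvme retries roughly two seconds later, which is the reconnect-delay behaviour this timeout test exercises. The same condition can be checked outside SPDK with a plain TCP connect; this is only a sketch, assuming bash with /dev/tcp support and the address/port used by this run.

  # Probe the listener the initiator is trying to reach; a refused connect
  # here corresponds to the "connect() failed, errno = 111" lines above.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
      echo "listener on 10.0.0.3:4420 is up"
  else
      echo "connect to 10.0.0.3:4420 refused or timed out (listener is down)"
  fi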
00:22:29.863 [2024-08-11 21:03:40.533659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:31.768 [2024-08-11 21:03:42.533884] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.768 [2024-08-11 21:03:42.533978] nvme_tcp.c:2388:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7ffe0 with addr=10.0.0.3, port=4420 00:22:31.768 [2024-08-11 21:03:42.533998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7ffe0 is same with the state(6) to be set 00:22:31.768 [2024-08-11 21:03:42.534031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7ffe0 (9): Bad file descriptor 00:22:31.768 [2024-08-11 21:03:42.534052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:31.768 [2024-08-11 21:03:42.534063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:31.768 [2024-08-11 21:03:42.534085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:31.768 [2024-08-11 21:03:42.534118] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.768 [2024-08-11 21:03:42.534130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:34.303 [2024-08-11 21:03:44.534216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:34.303 [2024-08-11 21:03:44.534689] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:34.303 [2024-08-11 21:03:44.534713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:34.303 [2024-08-11 21:03:44.534726] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:34.303 [2024-08-11 21:03:44.534770] bdev_nvme.c:2056:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.872 00:22:34.872 Latency(us) 00:22:34.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.872 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:22:34.872 NVMe0n1 : 8.16 2226.08 8.70 15.69 0.00 57000.40 7804.74 7015926.69 00:22:34.872 =================================================================================================================== 00:22:34.872 Total : 2226.08 8.70 15.69 0.00 57000.40 7804.74 7015926.69 00:22:34.872 0 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:34.872 Attaching 5 probes... 
00:22:34.872 1334.470087: reset bdev controller NVMe0 00:22:34.872 1334.829292: reconnect bdev controller NVMe0 00:22:34.872 3335.099689: reconnect delay bdev controller NVMe0 00:22:34.872 3335.130361: reconnect bdev controller NVMe0 00:22:34.872 5336.007091: reconnect delay bdev controller NVMe0 00:22:34.872 5336.033072: reconnect bdev controller NVMe0 00:22:34.872 7336.479878: reconnect delay bdev controller NVMe0 00:22:34.872 7336.513740: reconnect bdev controller NVMe0 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 94831 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 94815 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 94815 ']' 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 94815 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94815 00:22:34.872 killing process with pid 94815 00:22:34.872 Received shutdown signal, test time was about 8.234404 seconds 00:22:34.872 00:22:34.872 Latency(us) 00:22:34.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.872 =================================================================================================================== 00:22:34.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94815' 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 94815 00:22:34.872 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 94815 00:22:35.131 21:03:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.700 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:35.700 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:35.700 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # nvmfcleanup 00:22:35.700 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:22:35.700 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.700 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:22:35.700 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.701 rmmod 
nvme_tcp 00:22:35.701 rmmod nvme_fabrics 00:22:35.701 rmmod nvme_keyring 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # '[' -n 94371 ']' 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # killprocess 94371 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 94371 ']' 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 94371 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94371 00:22:35.701 killing process with pid 94371 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94371' 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 94371 00:22:35.701 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 94371 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # '[' '' == iso ']' 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@293 -- # iptr 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@783 -- # iptables-save 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@783 -- # iptables-restore 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:22:35.960 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:22:35.961 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:22:35.961 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:22:35.961 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:22:35.961 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:22:35.961 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:22:35.961 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:22:35.961 21:03:46 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # remove_spdk_ns 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@296 -- # return 0 00:22:36.220 ************************************ 00:22:36.220 END TEST nvmf_timeout 00:22:36.220 ************************************ 00:22:36.220 00:22:36.220 real 0m48.173s 00:22:36.220 user 2m20.840s 00:22:36.220 sys 0m6.182s 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:36.220 ************************************ 00:22:36.220 END TEST nvmf_host 00:22:36.220 ************************************ 00:22:36.220 00:22:36.220 real 5m52.919s 00:22:36.220 user 16m29.557s 00:22:36.220 sys 1m21.012s 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:36.220 21:03:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.220 ************************************ 00:22:36.220 END TEST nvmf_tcp 00:22:36.220 ************************************ 00:22:36.220 00:22:36.220 real 14m44.444s 00:22:36.220 user 39m4.918s 00:22:36.220 sys 3m57.146s 00:22:36.220 21:03:46 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:36.220 21:03:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:36.479 21:03:47 -- spdk/autotest.sh@294 -- # [[ 1 -eq 0 ]] 00:22:36.479 21:03:47 -- spdk/autotest.sh@298 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:36.479 21:03:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:36.479 21:03:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:36.479 21:03:47 -- common/autotest_common.sh@10 -- # set +x 00:22:36.479 ************************************ 00:22:36.479 START TEST nvmf_dif 00:22:36.479 ************************************ 00:22:36.479 21:03:47 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:36.479 * Looking for test storage... 
00:22:36.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:36.479 21:03:47 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.479 21:03:47 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:36.479 21:03:47 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.479 21:03:47 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.479 21:03:47 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.479 21:03:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.479 21:03:47 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.480 21:03:47 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.480 21:03:47 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:36.480 21:03:47 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.480 21:03:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:36.480 21:03:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:36.480 21:03:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:36.480 21:03:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:36.480 21:03:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@468 -- # prepare_net_devs 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@430 -- # local -g is_hw=no 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@432 -- # remove_spdk_ns 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.480 21:03:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:36.480 21:03:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@452 -- # nvmf_veth_init 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:36.480 21:03:47 nvmf_dif -- 
nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:22:36.480 Cannot find device "nvmf_init_br" 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@158 -- # true 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:22:36.480 Cannot find device "nvmf_init_br2" 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@159 -- # true 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:22:36.480 Cannot find device "nvmf_tgt_br" 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@160 -- # true 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:22:36.480 Cannot find device "nvmf_tgt_br2" 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@161 -- # true 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:22:36.480 Cannot find device "nvmf_init_br" 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:22:36.480 Cannot find device "nvmf_init_br2" 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:22:36.480 Cannot find device "nvmf_tgt_br" 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@164 -- # true 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:22:36.480 Cannot find device "nvmf_tgt_br2" 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@165 -- # true 00:22:36.480 21:03:47 nvmf_dif -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:22:36.739 Cannot find device "nvmf_br" 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@166 -- # true 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:22:36.739 Cannot find device "nvmf_init_if" 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@167 -- # true 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:22:36.739 Cannot find device "nvmf_init_if2" 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@168 -- # true 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:36.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@169 -- # true 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:36.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@170 -- # true 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@177 -- # ip link add 
nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:22:36.739 21:03:47 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:36.740 21:03:47 nvmf_dif -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:22:36.999 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:36.999 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:36.999 00:22:36.999 --- 10.0.0.3 ping statistics --- 00:22:36.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.999 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:22:36.999 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:36.999 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:22:36.999 00:22:36.999 --- 10.0.0.4 ping statistics --- 00:22:36.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.999 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:36.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:36.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:36.999 00:22:36.999 --- 10.0.0.1 ping statistics --- 00:22:36.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.999 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:36.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:22:36.999 00:22:36.999 --- 10.0.0.2 ping statistics --- 00:22:36.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.999 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@453 -- # return 0 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@470 -- # '[' iso == iso ']' 00:22:36.999 21:03:47 nvmf_dif -- nvmf/common.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:37.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:37.258 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:37.258 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:37.258 21:03:47 nvmf_dif -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.258 21:03:47 nvmf_dif -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:22:37.258 21:03:47 nvmf_dif -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:37.258 21:03:47 nvmf_dif -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.258 21:03:47 nvmf_dif -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:22:37.258 21:03:47 nvmf_dif -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:22:37.258 21:03:47 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:37.258 21:03:47 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:37.258 21:03:47 nvmf_dif -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:22:37.258 21:03:47 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:37.258 21:03:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:37.258 21:03:47 nvmf_dif -- nvmf/common.sh@501 -- # nvmfpid=95355 00:22:37.258 21:03:47 nvmf_dif -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:37.258 21:03:47 nvmf_dif -- nvmf/common.sh@502 -- # waitforlisten 95355 00:22:37.258 21:03:47 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 95355 ']' 00:22:37.258 21:03:47 nvmf_dif -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:22:37.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.258 21:03:47 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:37.258 21:03:47 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.258 21:03:47 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:37.258 21:03:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:37.518 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:22:37.518 [2024-08-11 21:03:48.039526] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:22:37.518 [2024-08-11 21:03:48.039666] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.518 [2024-08-11 21:03:48.181220] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.518 [2024-08-11 21:03:48.275098] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.518 [2024-08-11 21:03:48.275435] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.518 [2024-08-11 21:03:48.275465] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.518 [2024-08-11 21:03:48.275476] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.518 [2024-08-11 21:03:48.275486] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
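The trace above brings the target up via nvmfappstart: nvmf_tgt is launched inside the nvmf_tgt_ns_spdk namespace and the harness waits for its RPC socket before dif.sh issues its rpc_cmd calls. A minimal standalone sketch of that same sequence, assuming the stock scripts/rpc.py client and the paths shown in the trace; the RPC flag values are copied verbatim from the rpc_cmd invocations that appear further down:

# start the target inside the test namespace, as nvmfappstart does above
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

# poll the default UNIX-domain RPC socket until the app accepts commands
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 1; done

# TCP transport with DIF insert/strip, then a 64 MB null bdev with 512-byte blocks,
# 16 bytes of metadata and DIF type 1, exported as cnode0 on 10.0.0.3:4420
"$rpc" nvmf_create_transport -t tcp -o --dif-insert-or-strip
"$rpc" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
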
00:22:37.518 [2024-08-11 21:03:48.275523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.777 [2024-08-11 21:03:48.332212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:38.352 21:03:49 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:38.352 21:03:49 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:22:38.352 21:03:49 nvmf_dif -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:22:38.352 21:03:49 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.352 21:03:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:38.664 21:03:49 nvmf_dif -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.664 21:03:49 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:38.664 21:03:49 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:38.664 21:03:49 nvmf_dif -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:38.664 21:03:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:38.664 [2024-08-11 21:03:49.143342] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.664 21:03:49 nvmf_dif -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:38.664 21:03:49 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:38.664 21:03:49 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:38.664 21:03:49 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:38.664 21:03:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:38.664 ************************************ 00:22:38.664 START TEST fio_dif_1_default 00:22:38.664 ************************************ 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:38.664 bdev_null0 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:38.664 21:03:49 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:38.664 [2024-08-11 21:03:49.195484] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@552 -- # config=() 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@552 -- # local subsystem config 00:22:38.664 21:03:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:22:38.665 { 00:22:38.665 "params": { 00:22:38.665 "name": "Nvme$subsystem", 00:22:38.665 "trtype": "$TEST_TRANSPORT", 00:22:38.665 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.665 "adrfam": "ipv4", 00:22:38.665 "trsvcid": "$NVMF_PORT", 00:22:38.665 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.665 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.665 "hdgst": ${hdgst:-false}, 00:22:38.665 "ddgst": ${ddgst:-false} 00:22:38.665 }, 00:22:38.665 "method": "bdev_nvme_attach_controller" 00:22:38.665 } 00:22:38.665 EOF 00:22:38.665 )") 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@574 -- # cat 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@72 -- # (( file = 1 )) 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@576 -- # jq . 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@577 -- # IFS=, 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:22:38.665 "params": { 00:22:38.665 "name": "Nvme0", 00:22:38.665 "trtype": "tcp", 00:22:38.665 "traddr": "10.0.0.3", 00:22:38.665 "adrfam": "ipv4", 00:22:38.665 "trsvcid": "4420", 00:22:38.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:38.665 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:38.665 "hdgst": false, 00:22:38.665 "ddgst": false 00:22:38.665 }, 00:22:38.665 "method": "bdev_nvme_attach_controller" 00:22:38.665 }' 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:38.665 21:03:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:38.665 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:38.665 fio-3.35 00:22:38.665 Starting 1 thread 00:22:38.665 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:22:50.894 00:22:50.894 filename0: (groupid=0, jobs=1): err= 0: pid=95422: Sun Aug 11 21:03:59 2024 00:22:50.894 read: IOPS=9227, BW=36.0MiB/s (37.8MB/s)(360MiB/10001msec) 00:22:50.894 slat (usec): min=5, max=4046, avg= 8.36, stdev=19.05 00:22:50.894 clat (usec): min=367, max=4951, avg=408.59, stdev=59.63 00:22:50.894 lat (usec): min=374, max=4981, avg=416.96, stdev=63.29 00:22:50.894 clat percentiles (usec): 00:22:50.894 | 1.00th=[ 375], 5.00th=[ 379], 10.00th=[ 383], 20.00th=[ 392], 00:22:50.894 | 30.00th=[ 396], 40.00th=[ 400], 50.00th=[ 404], 60.00th=[ 412], 00:22:50.894 | 70.00th=[ 416], 80.00th=[ 424], 90.00th=[ 437], 95.00th=[ 445], 00:22:50.894 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 562], 99.95th=[ 611], 00:22:50.894 | 99.99th=[ 4228] 00:22:50.894 bw ( KiB/s): min=34304, max=37408, per=100.00%, avg=36921.26, stdev=677.78, samples=19 00:22:50.894 iops : min= 8576, max= 9352, avg=9230.32, stdev=169.45, 
samples=19 00:22:50.894 lat (usec) : 500=99.63%, 750=0.33%, 1000=0.01% 00:22:50.894 lat (msec) : 4=0.01%, 10=0.01% 00:22:50.894 cpu : usr=83.30%, sys=14.61%, ctx=17, majf=0, minf=0 00:22:50.894 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:50.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.894 issued rwts: total=92284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.894 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:50.894 00:22:50.894 Run status group 0 (all jobs): 00:22:50.894 READ: bw=36.0MiB/s (37.8MB/s), 36.0MiB/s-36.0MiB/s (37.8MB/s-37.8MB/s), io=360MiB (378MB), run=10001-10001msec 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:50.894 21:04:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:50.894 ************************************ 00:22:50.894 END TEST fio_dif_1_default 00:22:50.895 ************************************ 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:50.895 00:22:50.895 real 0m11.142s 00:22:50.895 user 0m9.066s 00:22:50.895 sys 0m1.788s 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:50.895 21:04:00 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:50.895 21:04:00 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:50.895 21:04:00 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:50.895 21:04:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:50.895 ************************************ 00:22:50.895 START TEST fio_dif_1_multi_subsystems 00:22:50.895 ************************************ 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:50.895 21:04:00 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.895 bdev_null0 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.895 [2024-08-11 21:04:00.395810] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.895 bdev_null1 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@552 -- # config=() 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@552 -- # local subsystem config 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:22:50.895 { 00:22:50.895 "params": { 00:22:50.895 "name": "Nvme$subsystem", 00:22:50.895 "trtype": "$TEST_TRANSPORT", 00:22:50.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.895 "adrfam": "ipv4", 00:22:50.895 "trsvcid": "$NVMF_PORT", 00:22:50.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.895 "hdgst": ${hdgst:-false}, 00:22:50.895 "ddgst": ${ddgst:-false} 00:22:50.895 }, 00:22:50.895 "method": "bdev_nvme_attach_controller" 00:22:50.895 } 00:22:50.895 EOF 00:22:50.895 )") 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1337 -- # shift 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@574 -- # cat 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:22:50.895 { 00:22:50.895 "params": { 00:22:50.895 "name": "Nvme$subsystem", 00:22:50.895 "trtype": "$TEST_TRANSPORT", 00:22:50.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:50.895 "adrfam": "ipv4", 00:22:50.895 "trsvcid": "$NVMF_PORT", 00:22:50.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:50.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:50.895 "hdgst": ${hdgst:-false}, 00:22:50.895 "ddgst": ${ddgst:-false} 00:22:50.895 }, 00:22:50.895 "method": "bdev_nvme_attach_controller" 00:22:50.895 } 00:22:50.895 EOF 00:22:50.895 )") 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@574 -- # cat 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@576 -- # jq . 
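fio_bdev runs fio with the SPDK bdev plugin preloaded and hands it two anonymous descriptors: /dev/fd/62 carries the bdev configuration JSON printed just below (one bdev_nvme_attach_controller entry per subsystem) and /dev/fd/61 carries the generated job file, which never appears in the trace. A rough equivalent using ordinary files, assuming the plugin built under build/fio/spdk_bdev and the Nvme0n1/Nvme1n1 bdev names that bdev_nvme_attach_controller gives the two controllers' first namespaces; the job parameters (4 KiB random reads, queue depth 4, roughly 10-second run) are the ones visible in the filename0/filename1 and run-status lines further down:

# jobs.fio: one job per exported subsystem (section names mirror the trace,
# the filename= values are assumptions, not taken from the trace)
cat > jobs.fio <<'EOF'
[global]
thread=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# bdev.json: the JSON that gen_nvmf_target_json prints below, saved to a file
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json jobs.fio
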
00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@577 -- # IFS=, 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:22:50.895 "params": { 00:22:50.895 "name": "Nvme0", 00:22:50.895 "trtype": "tcp", 00:22:50.895 "traddr": "10.0.0.3", 00:22:50.895 "adrfam": "ipv4", 00:22:50.895 "trsvcid": "4420", 00:22:50.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:50.895 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:50.895 "hdgst": false, 00:22:50.895 "ddgst": false 00:22:50.895 }, 00:22:50.895 "method": "bdev_nvme_attach_controller" 00:22:50.895 },{ 00:22:50.895 "params": { 00:22:50.895 "name": "Nvme1", 00:22:50.895 "trtype": "tcp", 00:22:50.895 "traddr": "10.0.0.3", 00:22:50.895 "adrfam": "ipv4", 00:22:50.895 "trsvcid": "4420", 00:22:50.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:50.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:50.895 "hdgst": false, 00:22:50.895 "ddgst": false 00:22:50.895 }, 00:22:50.895 "method": "bdev_nvme_attach_controller" 00:22:50.895 }' 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:50.895 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:50.896 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.896 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:50.896 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:22:50.896 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:50.896 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:50.896 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:50.896 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:50.896 21:04:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:50.896 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:50.896 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:50.896 fio-3.35 00:22:50.896 Starting 2 threads 00:22:50.896 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:00.871 00:23:00.871 filename0: (groupid=0, jobs=1): err= 0: pid=95582: Sun Aug 11 21:04:11 2024 00:23:00.871 read: IOPS=4965, BW=19.4MiB/s (20.3MB/s)(194MiB/10001msec) 00:23:00.871 slat (usec): min=6, max=730, avg=16.07, stdev= 9.01 00:23:00.871 clat (usec): min=477, max=1990, avg=761.95, stdev=42.81 00:23:00.871 lat (usec): min=503, max=2012, avg=778.02, stdev=45.92 00:23:00.871 clat percentiles (usec): 00:23:00.871 | 1.00th=[ 693], 5.00th=[ 709], 10.00th=[ 717], 20.00th=[ 725], 00:23:00.871 | 30.00th=[ 742], 40.00th=[ 750], 50.00th=[ 758], 60.00th=[ 766], 00:23:00.871 | 70.00th=[ 775], 80.00th=[ 791], 90.00th=[ 816], 95.00th=[ 840], 00:23:00.871 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 963], 00:23:00.871 | 99.99th=[ 1500] 00:23:00.871 bw ( KiB/s): min=18592, max=20768, per=50.16%, 
avg=19929.26, stdev=788.00, samples=19 00:23:00.871 iops : min= 4648, max= 5192, avg=4982.32, stdev=197.00, samples=19 00:23:00.871 lat (usec) : 500=0.01%, 750=44.31%, 1000=55.64% 00:23:00.871 lat (msec) : 2=0.04% 00:23:00.871 cpu : usr=90.15%, sys=8.44%, ctx=26, majf=0, minf=1 00:23:00.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:00.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.871 issued rwts: total=49656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.871 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:00.871 filename1: (groupid=0, jobs=1): err= 0: pid=95583: Sun Aug 11 21:04:11 2024 00:23:00.871 read: IOPS=4967, BW=19.4MiB/s (20.3MB/s)(194MiB/10001msec) 00:23:00.871 slat (nsec): min=6566, max=83089, avg=15748.18, stdev=6843.41 00:23:00.871 clat (usec): min=397, max=1928, avg=763.33, stdev=49.71 00:23:00.871 lat (usec): min=404, max=1953, avg=779.08, stdev=52.79 00:23:00.871 clat percentiles (usec): 00:23:00.871 | 1.00th=[ 660], 5.00th=[ 685], 10.00th=[ 701], 20.00th=[ 725], 00:23:00.871 | 30.00th=[ 742], 40.00th=[ 750], 50.00th=[ 758], 60.00th=[ 775], 00:23:00.871 | 70.00th=[ 783], 80.00th=[ 799], 90.00th=[ 824], 95.00th=[ 848], 00:23:00.871 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 938], 99.95th=[ 947], 00:23:00.871 | 99.99th=[ 1483] 00:23:00.871 bw ( KiB/s): min=18592, max=20768, per=50.18%, avg=19937.68, stdev=793.54, samples=19 00:23:00.871 iops : min= 4648, max= 5192, avg=4984.42, stdev=198.38, samples=19 00:23:00.871 lat (usec) : 500=0.04%, 750=38.68%, 1000=61.26% 00:23:00.871 lat (msec) : 2=0.02% 00:23:00.871 cpu : usr=90.77%, sys=7.95%, ctx=19, majf=0, minf=0 00:23:00.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:00.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.871 issued rwts: total=49676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.871 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:00.871 00:23:00.871 Run status group 0 (all jobs): 00:23:00.871 READ: bw=38.8MiB/s (40.7MB/s), 19.4MiB/s-19.4MiB/s (20.3MB/s-20.3MB/s), io=388MiB (407MB), run=10001-10001msec 00:23:00.871 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:00.871 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:00.871 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:00.871 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:00.871 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:00.871 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:00.871 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # 
xtrace_disable 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:00.872 ************************************ 00:23:00.872 END TEST fio_dif_1_multi_subsystems 00:23:00.872 ************************************ 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:00.872 00:23:00.872 real 0m11.283s 00:23:00.872 user 0m18.952s 00:23:00.872 sys 0m2.005s 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:00.872 21:04:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:01.130 21:04:11 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:01.130 21:04:11 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:01.130 21:04:11 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:01.130 21:04:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:01.130 ************************************ 00:23:01.130 START TEST fio_dif_rand_params 00:23:01.130 ************************************ 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:01.130 21:04:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.130 bdev_null0 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.130 [2024-08-11 21:04:11.734483] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@552 -- # config=() 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@552 -- # local subsystem config 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:23:01.130 { 00:23:01.130 "params": { 00:23:01.130 "name": "Nvme$subsystem", 00:23:01.130 "trtype": "$TEST_TRANSPORT", 00:23:01.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.130 "adrfam": "ipv4", 00:23:01.130 "trsvcid": "$NVMF_PORT", 00:23:01.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.130 "hdgst": ${hdgst:-false}, 00:23:01.130 "ddgst": ${ddgst:-false} 00:23:01.130 }, 00:23:01.130 "method": "bdev_nvme_attach_controller" 00:23:01.130 } 00:23:01.130 EOF 00:23:01.130 )") 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:01.130 
21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:01.130 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # cat 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@576 -- # jq . 
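This first rand_params pass differs from the earlier cases only in its parameters: the null bdev behind cnode0 was created with --dif-type 3 (still 512-byte blocks plus 16 bytes of metadata, with the transport handling --dif-insert-or-strip), and fio is driven through the same plugin invocation as before with a heavier job mix. A guess at the shape of the generated job file, taking the values from the trace (128 KiB blocks, 3 jobs, queue depth 3, 5-second runtime) and assuming the same Nvme0n1 filename as in the sketch above:

# rand_params.fio: parameters from the NULL_DIF=3 / bs=128k / numjobs=3 / iodepth=3 case
cat > rand_params.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
numjobs=3
iodepth=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF
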
00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@577 -- # IFS=, 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:23:01.131 "params": { 00:23:01.131 "name": "Nvme0", 00:23:01.131 "trtype": "tcp", 00:23:01.131 "traddr": "10.0.0.3", 00:23:01.131 "adrfam": "ipv4", 00:23:01.131 "trsvcid": "4420", 00:23:01.131 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.131 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:01.131 "hdgst": false, 00:23:01.131 "ddgst": false 00:23:01.131 }, 00:23:01.131 "method": "bdev_nvme_attach_controller" 00:23:01.131 }' 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:01.131 21:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.389 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:01.389 ... 
00:23:01.389 fio-3.35 00:23:01.389 Starting 3 threads 00:23:01.389 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:07.959 00:23:07.959 filename0: (groupid=0, jobs=1): err= 0: pid=95740: Sun Aug 11 21:04:17 2024 00:23:07.959 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5005msec) 00:23:07.959 slat (nsec): min=7133, max=58343, avg=13003.51, stdev=7056.20 00:23:07.959 clat (usec): min=5193, max=12326, avg=11136.38, stdev=475.99 00:23:07.959 lat (usec): min=5201, max=12348, avg=11149.38, stdev=476.33 00:23:07.960 clat percentiles (usec): 00:23:07.960 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10552], 20.00th=[10814], 00:23:07.960 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:23:07.960 | 70.00th=[11338], 80.00th=[11338], 90.00th=[11600], 95.00th=[11731], 00:23:07.960 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12387], 00:23:07.960 | 99.99th=[12387] 00:23:07.960 bw ( KiB/s): min=33024, max=36096, per=33.34%, avg=34329.60, stdev=1089.13, samples=10 00:23:07.960 iops : min= 258, max= 282, avg=268.20, stdev= 8.51, samples=10 00:23:07.960 lat (msec) : 10=0.22%, 20=99.78% 00:23:07.960 cpu : usr=94.08%, sys=5.26%, ctx=13, majf=0, minf=0 00:23:07.960 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:07.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.960 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.960 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:07.960 filename0: (groupid=0, jobs=1): err= 0: pid=95741: Sun Aug 11 21:04:17 2024 00:23:07.960 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(168MiB/5002msec) 00:23:07.960 slat (nsec): min=7041, max=65666, avg=13352.75, stdev=7454.45 00:23:07.960 clat (usec): min=10103, max=12276, avg=11154.10, stdev=384.22 00:23:07.960 lat (usec): min=10111, max=12297, avg=11167.46, stdev=385.15 00:23:07.960 clat percentiles (usec): 00:23:07.960 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10552], 20.00th=[10814], 00:23:07.960 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:23:07.960 | 70.00th=[11338], 80.00th=[11338], 90.00th=[11600], 95.00th=[11731], 00:23:07.960 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:23:07.960 | 99.99th=[12256] 00:23:07.960 bw ( KiB/s): min=33024, max=36096, per=33.27%, avg=34259.80, stdev=1044.99, samples=10 00:23:07.960 iops : min= 258, max= 282, avg=267.60, stdev= 8.10, samples=10 00:23:07.960 lat (msec) : 20=100.00% 00:23:07.960 cpu : usr=93.76%, sys=5.66%, ctx=7, majf=0, minf=0 00:23:07.960 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:07.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.960 issued rwts: total=1341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.960 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:07.960 filename0: (groupid=0, jobs=1): err= 0: pid=95742: Sun Aug 11 21:04:17 2024 00:23:07.960 read: IOPS=268, BW=33.5MiB/s (35.1MB/s)(168MiB/5001msec) 00:23:07.960 slat (nsec): min=7137, max=56070, avg=12079.34, stdev=6186.52 00:23:07.960 clat (usec): min=9427, max=12641, avg=11156.13, stdev=397.79 00:23:07.960 lat (usec): min=9435, max=12667, avg=11168.21, stdev=397.76 00:23:07.960 clat percentiles (usec): 00:23:07.960 | 1.00th=[10421], 5.00th=[10421], 
10.00th=[10552], 20.00th=[10814], 00:23:07.960 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:23:07.960 | 70.00th=[11338], 80.00th=[11338], 90.00th=[11731], 95.00th=[11731], 00:23:07.960 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12649], 99.95th=[12649], 00:23:07.960 | 99.99th=[12649] 00:23:07.960 bw ( KiB/s): min=33024, max=36096, per=33.27%, avg=34259.80, stdev=1044.99, samples=10 00:23:07.960 iops : min= 258, max= 282, avg=267.60, stdev= 8.10, samples=10 00:23:07.960 lat (msec) : 10=0.22%, 20=99.78% 00:23:07.960 cpu : usr=93.56%, sys=5.78%, ctx=38, majf=0, minf=9 00:23:07.960 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:07.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.960 issued rwts: total=1341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.960 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:07.960 00:23:07.960 Run status group 0 (all jobs): 00:23:07.960 READ: bw=101MiB/s (105MB/s), 33.5MiB/s-33.6MiB/s (35.1MB/s-35.2MB/s), io=503MiB (528MB), run=5001-5005msec 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 bdev_null0 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 [2024-08-11 21:04:17.761643] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 bdev_null1 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 bdev_null2 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.960 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@552 -- # config=() 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@552 -- # local subsystem config 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:23:07.961 { 00:23:07.961 "params": { 00:23:07.961 "name": 
"Nvme$subsystem", 00:23:07.961 "trtype": "$TEST_TRANSPORT", 00:23:07.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.961 "adrfam": "ipv4", 00:23:07.961 "trsvcid": "$NVMF_PORT", 00:23:07.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.961 "hdgst": ${hdgst:-false}, 00:23:07.961 "ddgst": ${ddgst:-false} 00:23:07.961 }, 00:23:07.961 "method": "bdev_nvme_attach_controller" 00:23:07.961 } 00:23:07.961 EOF 00:23:07.961 )") 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # cat 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:23:07.961 { 00:23:07.961 "params": { 00:23:07.961 "name": "Nvme$subsystem", 00:23:07.961 "trtype": "$TEST_TRANSPORT", 00:23:07.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.961 "adrfam": "ipv4", 00:23:07.961 "trsvcid": "$NVMF_PORT", 00:23:07.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.961 "hdgst": ${hdgst:-false}, 00:23:07.961 "ddgst": ${ddgst:-false} 00:23:07.961 }, 00:23:07.961 "method": "bdev_nvme_attach_controller" 00:23:07.961 } 00:23:07.961 EOF 00:23:07.961 )") 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # cat 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:07.961 21:04:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:23:07.961 { 00:23:07.961 "params": { 00:23:07.961 "name": "Nvme$subsystem", 00:23:07.961 "trtype": "$TEST_TRANSPORT", 00:23:07.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.961 "adrfam": "ipv4", 00:23:07.961 "trsvcid": "$NVMF_PORT", 00:23:07.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.961 "hdgst": ${hdgst:-false}, 00:23:07.961 "ddgst": ${ddgst:-false} 00:23:07.961 }, 00:23:07.961 "method": "bdev_nvme_attach_controller" 00:23:07.961 } 00:23:07.961 EOF 00:23:07.961 )") 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # cat 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@576 -- # jq . 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@577 -- # IFS=, 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:23:07.961 "params": { 00:23:07.961 "name": "Nvme0", 00:23:07.961 "trtype": "tcp", 00:23:07.961 "traddr": "10.0.0.3", 00:23:07.961 "adrfam": "ipv4", 00:23:07.961 "trsvcid": "4420", 00:23:07.961 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:07.961 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:07.961 "hdgst": false, 00:23:07.961 "ddgst": false 00:23:07.961 }, 00:23:07.961 "method": "bdev_nvme_attach_controller" 00:23:07.961 },{ 00:23:07.961 "params": { 00:23:07.961 "name": "Nvme1", 00:23:07.961 "trtype": "tcp", 00:23:07.961 "traddr": "10.0.0.3", 00:23:07.961 "adrfam": "ipv4", 00:23:07.961 "trsvcid": "4420", 00:23:07.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.961 "hdgst": false, 00:23:07.961 "ddgst": false 00:23:07.961 }, 00:23:07.961 "method": "bdev_nvme_attach_controller" 00:23:07.961 },{ 00:23:07.961 "params": { 00:23:07.961 "name": "Nvme2", 00:23:07.961 "trtype": "tcp", 00:23:07.961 "traddr": "10.0.0.3", 00:23:07.961 "adrfam": "ipv4", 00:23:07.961 "trsvcid": "4420", 00:23:07.961 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:07.961 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:07.961 "hdgst": false, 00:23:07.961 "ddgst": false 00:23:07.961 }, 00:23:07.961 "method": "bdev_nvme_attach_controller" 00:23:07.961 }' 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:07.961 
21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:07.961 21:04:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:07.961 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:07.961 ... 00:23:07.961 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:07.961 ... 00:23:07.961 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:07.961 ... 00:23:07.961 fio-3.35 00:23:07.961 Starting 24 threads 00:23:07.961 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:20.170 00:23:20.170 filename0: (groupid=0, jobs=1): err= 0: pid=95837: Sun Aug 11 21:04:28 2024 00:23:20.170 read: IOPS=205, BW=821KiB/s (841kB/s)(8244KiB/10036msec) 00:23:20.170 slat (usec): min=3, max=8040, avg=43.02, stdev=413.74 00:23:20.170 clat (msec): min=30, max=143, avg=77.69, stdev=22.56 00:23:20.170 lat (msec): min=30, max=143, avg=77.73, stdev=22.56 00:23:20.170 clat percentiles (msec): 00:23:20.170 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:23:20.170 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:23:20.170 | 70.00th=[ 85], 80.00th=[ 105], 90.00th=[ 114], 95.00th=[ 121], 00:23:20.170 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 125], 99.95th=[ 134], 00:23:20.170 | 99.99th=[ 144] 00:23:20.170 bw ( KiB/s): min= 584, max= 1128, per=4.24%, avg=817.85, stdev=171.00, samples=20 00:23:20.170 iops : min= 146, max= 282, avg=204.45, stdev=42.73, samples=20 00:23:20.170 lat (msec) : 50=11.11%, 100=67.69%, 250=21.20% 00:23:20.170 cpu : usr=38.05%, sys=1.61%, ctx=1175, majf=0, minf=9 00:23:20.170 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:20.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.170 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.170 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.170 filename0: (groupid=0, jobs=1): err= 0: pid=95838: Sun Aug 11 21:04:28 2024 00:23:20.170 read: IOPS=203, BW=816KiB/s (835kB/s)(8168KiB/10012msec) 00:23:20.170 slat (usec): min=7, max=8036, avg=42.08, stdev=398.70 00:23:20.170 clat (msec): min=35, max=131, avg=78.23, stdev=20.94 00:23:20.170 lat (msec): min=35, max=131, avg=78.27, stdev=20.94 00:23:20.170 clat percentiles (msec): 00:23:20.170 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:23:20.170 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:23:20.170 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 120], 00:23:20.170 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 132], 99.95th=[ 132], 00:23:20.170 | 99.99th=[ 132] 00:23:20.170 bw ( KiB/s): min= 664, max= 1080, per=4.21%, avg=812.45, stdev=140.39, samples=20 00:23:20.170 iops : min= 166, max= 270, avg=203.10, stdev=35.10, samples=20 00:23:20.170 lat (msec) : 50=10.14%, 100=71.50%, 250=18.36% 00:23:20.170 cpu : usr=31.87%, sys=1.19%, ctx=952, majf=0, minf=9 00:23:20.170 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=77.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:23:20.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.170 complete : 
0=0.0%, 4=88.5%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.170 issued rwts: total=2042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.170 filename0: (groupid=0, jobs=1): err= 0: pid=95839: Sun Aug 11 21:04:28 2024 00:23:20.170 read: IOPS=207, BW=831KiB/s (851kB/s)(8368KiB/10070msec) 00:23:20.170 slat (usec): min=4, max=8036, avg=31.61, stdev=286.28 00:23:20.170 clat (msec): min=6, max=160, avg=76.78, stdev=25.42 00:23:20.170 lat (msec): min=6, max=160, avg=76.81, stdev=25.42 00:23:20.170 clat percentiles (msec): 00:23:20.170 | 1.00th=[ 14], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 56], 00:23:20.170 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:23:20.170 | 70.00th=[ 87], 80.00th=[ 107], 90.00th=[ 115], 95.00th=[ 121], 00:23:20.170 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 144], 99.95th=[ 157], 00:23:20.170 | 99.99th=[ 161] 00:23:20.170 bw ( KiB/s): min= 560, max= 1192, per=4.30%, avg=830.10, stdev=198.22, samples=20 00:23:20.170 iops : min= 140, max= 298, avg=207.50, stdev=49.52, samples=20 00:23:20.170 lat (msec) : 10=0.76%, 20=0.76%, 50=14.24%, 100=60.42%, 250=23.80% 00:23:20.170 cpu : usr=36.28%, sys=1.65%, ctx=1214, majf=0, minf=9 00:23:20.170 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.5%, 16=16.5%, 32=0.0%, >=64=0.0% 00:23:20.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.170 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.170 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.170 filename0: (groupid=0, jobs=1): err= 0: pid=95840: Sun Aug 11 21:04:28 2024 00:23:20.170 read: IOPS=205, BW=824KiB/s (843kB/s)(8300KiB/10077msec) 00:23:20.170 slat (usec): min=4, max=4056, avg=22.69, stdev=134.60 00:23:20.170 clat (msec): min=5, max=152, avg=77.54, stdev=25.37 00:23:20.170 lat (msec): min=5, max=152, avg=77.56, stdev=25.36 00:23:20.170 clat percentiles (msec): 00:23:20.170 | 1.00th=[ 10], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 60], 00:23:20.170 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:23:20.170 | 70.00th=[ 85], 80.00th=[ 107], 90.00th=[ 117], 95.00th=[ 121], 00:23:20.170 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 144], 00:23:20.170 | 99.99th=[ 153] 00:23:20.170 bw ( KiB/s): min= 584, max= 1142, per=4.26%, avg=822.80, stdev=182.33, samples=20 00:23:20.170 iops : min= 146, max= 285, avg=205.65, stdev=45.50, samples=20 00:23:20.170 lat (msec) : 10=1.20%, 20=1.11%, 50=13.06%, 100=61.54%, 250=23.08% 00:23:20.170 cpu : usr=34.19%, sys=1.49%, ctx=925, majf=0, minf=9 00:23:20.170 IO depths : 1=0.2%, 2=0.8%, 4=2.7%, 8=80.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:20.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.170 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.170 issued rwts: total=2075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.170 filename0: (groupid=0, jobs=1): err= 0: pid=95841: Sun Aug 11 21:04:28 2024 00:23:20.170 read: IOPS=203, BW=813KiB/s (832kB/s)(8172KiB/10052msec) 00:23:20.170 slat (usec): min=3, max=8032, avg=29.14, stdev=222.94 00:23:20.170 clat (msec): min=13, max=157, avg=78.49, stdev=22.85 00:23:20.170 lat (msec): min=14, max=157, avg=78.52, stdev=22.85 00:23:20.170 clat percentiles (msec): 00:23:20.170 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 
53], 20.00th=[ 63], 00:23:20.170 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 78], 00:23:20.170 | 70.00th=[ 83], 80.00th=[ 105], 90.00th=[ 115], 95.00th=[ 121], 00:23:20.170 | 99.00th=[ 124], 99.50th=[ 138], 99.90th=[ 148], 99.95th=[ 157], 00:23:20.170 | 99.99th=[ 157] 00:23:20.170 bw ( KiB/s): min= 616, max= 1072, per=4.20%, avg=810.20, stdev=145.46, samples=20 00:23:20.170 iops : min= 154, max= 268, avg=202.50, stdev=36.34, samples=20 00:23:20.170 lat (msec) : 20=0.69%, 50=7.29%, 100=69.41%, 250=22.61% 00:23:20.170 cpu : usr=44.17%, sys=1.94%, ctx=1312, majf=0, minf=9 00:23:20.170 IO depths : 1=0.1%, 2=1.7%, 4=6.6%, 8=76.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:20.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.170 complete : 0=0.0%, 4=89.0%, 8=9.6%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.170 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.170 filename0: (groupid=0, jobs=1): err= 0: pid=95842: Sun Aug 11 21:04:28 2024 00:23:20.170 read: IOPS=202, BW=809KiB/s (829kB/s)(8128KiB/10044msec) 00:23:20.170 slat (usec): min=5, max=8050, avg=45.69, stdev=373.18 00:23:20.170 clat (msec): min=35, max=155, avg=78.77, stdev=20.89 00:23:20.170 lat (msec): min=35, max=155, avg=78.82, stdev=20.88 00:23:20.170 clat percentiles (msec): 00:23:20.170 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 64], 00:23:20.170 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 80], 00:23:20.170 | 70.00th=[ 84], 80.00th=[ 102], 90.00th=[ 113], 95.00th=[ 118], 00:23:20.170 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 150], 99.95th=[ 150], 00:23:20.171 | 99.99th=[ 157] 00:23:20.171 bw ( KiB/s): min= 592, max= 992, per=4.18%, avg=806.25, stdev=131.94, samples=20 00:23:20.171 iops : min= 148, max= 248, avg=201.50, stdev=32.93, samples=20 00:23:20.171 lat (msec) : 50=8.91%, 100=70.82%, 250=20.28% 00:23:20.171 cpu : usr=40.13%, sys=1.80%, ctx=1253, majf=0, minf=9 00:23:20.171 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:20.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.171 filename0: (groupid=0, jobs=1): err= 0: pid=95843: Sun Aug 11 21:04:28 2024 00:23:20.171 read: IOPS=197, BW=789KiB/s (808kB/s)(7892KiB/10006msec) 00:23:20.171 slat (usec): min=7, max=12061, avg=52.87, stdev=555.58 00:23:20.171 clat (msec): min=12, max=148, avg=80.89, stdev=23.01 00:23:20.171 lat (msec): min=12, max=148, avg=80.94, stdev=23.00 00:23:20.171 clat percentiles (msec): 00:23:20.171 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:23:20.171 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:23:20.171 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 121], 00:23:20.171 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 148], 99.95th=[ 148], 00:23:20.171 | 99.99th=[ 148] 00:23:20.171 bw ( KiB/s): min= 528, max= 1032, per=4.07%, avg=785.42, stdev=158.50, samples=19 00:23:20.171 iops : min= 132, max= 258, avg=196.26, stdev=39.66, samples=19 00:23:20.171 lat (msec) : 20=0.30%, 50=10.75%, 100=64.22%, 250=24.73% 00:23:20.171 cpu : usr=31.79%, sys=1.27%, ctx=961, majf=0, minf=9 00:23:20.171 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=76.4%, 16=15.0%, 32=0.0%, >=64=0.0% 
00:23:20.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 complete : 0=0.0%, 4=88.8%, 8=9.6%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 issued rwts: total=1973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.171 filename0: (groupid=0, jobs=1): err= 0: pid=95844: Sun Aug 11 21:04:28 2024 00:23:20.171 read: IOPS=197, BW=792KiB/s (811kB/s)(7932KiB/10020msec) 00:23:20.171 slat (usec): min=5, max=9029, avg=36.68, stdev=372.45 00:23:20.171 clat (msec): min=33, max=131, avg=80.63, stdev=21.24 00:23:20.171 lat (msec): min=33, max=131, avg=80.66, stdev=21.24 00:23:20.171 clat percentiles (msec): 00:23:20.171 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 64], 00:23:20.171 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:23:20.171 | 70.00th=[ 86], 80.00th=[ 105], 90.00th=[ 116], 95.00th=[ 121], 00:23:20.171 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 132], 00:23:20.171 | 99.99th=[ 132] 00:23:20.171 bw ( KiB/s): min= 640, max= 1080, per=4.08%, avg=787.00, stdev=140.96, samples=20 00:23:20.171 iops : min= 160, max= 270, avg=196.70, stdev=35.27, samples=20 00:23:20.171 lat (msec) : 50=8.02%, 100=69.64%, 250=22.34% 00:23:20.171 cpu : usr=36.74%, sys=1.51%, ctx=1247, majf=0, minf=9 00:23:20.171 IO depths : 1=0.1%, 2=2.8%, 4=11.0%, 8=71.9%, 16=14.3%, 32=0.0%, >=64=0.0% 00:23:20.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 complete : 0=0.0%, 4=90.0%, 8=7.6%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 issued rwts: total=1983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.171 filename1: (groupid=0, jobs=1): err= 0: pid=95845: Sun Aug 11 21:04:28 2024 00:23:20.171 read: IOPS=190, BW=763KiB/s (781kB/s)(7688KiB/10075msec) 00:23:20.171 slat (usec): min=4, max=5140, avg=33.15, stdev=229.96 00:23:20.171 clat (msec): min=5, max=156, avg=83.58, stdev=25.06 00:23:20.171 lat (msec): min=5, max=156, avg=83.61, stdev=25.06 00:23:20.171 clat percentiles (msec): 00:23:20.171 | 1.00th=[ 9], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 68], 00:23:20.171 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 81], 60.00th=[ 89], 00:23:20.171 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 117], 95.00th=[ 121], 00:23:20.171 | 99.00th=[ 138], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 157], 00:23:20.171 | 99.99th=[ 157] 00:23:20.171 bw ( KiB/s): min= 528, max= 1272, per=3.95%, avg=761.80, stdev=173.66, samples=20 00:23:20.171 iops : min= 132, max= 318, avg=190.40, stdev=43.38, samples=20 00:23:20.171 lat (msec) : 10=2.39%, 20=0.10%, 50=5.15%, 100=63.79%, 250=28.56% 00:23:20.171 cpu : usr=42.35%, sys=2.05%, ctx=1491, majf=0, minf=9 00:23:20.171 IO depths : 1=0.1%, 2=2.9%, 4=11.6%, 8=70.7%, 16=14.7%, 32=0.0%, >=64=0.0% 00:23:20.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 complete : 0=0.0%, 4=90.7%, 8=6.8%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 issued rwts: total=1922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.171 filename1: (groupid=0, jobs=1): err= 0: pid=95846: Sun Aug 11 21:04:28 2024 00:23:20.171 read: IOPS=198, BW=793KiB/s (812kB/s)(7964KiB/10043msec) 00:23:20.171 slat (usec): min=7, max=12035, avg=46.69, stdev=483.17 00:23:20.171 clat (msec): min=35, max=143, avg=80.36, stdev=21.53 00:23:20.171 lat (msec): min=35, max=143, avg=80.40, stdev=21.52 
00:23:20.171 clat percentiles (msec): 00:23:20.171 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 61], 00:23:20.171 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:23:20.171 | 70.00th=[ 86], 80.00th=[ 107], 90.00th=[ 115], 95.00th=[ 121], 00:23:20.171 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 134], 99.95th=[ 144], 00:23:20.171 | 99.99th=[ 144] 00:23:20.171 bw ( KiB/s): min= 608, max= 1016, per=4.09%, avg=789.80, stdev=129.73, samples=20 00:23:20.171 iops : min= 152, max= 254, avg=197.40, stdev=32.41, samples=20 00:23:20.171 lat (msec) : 50=9.34%, 100=69.26%, 250=21.40% 00:23:20.171 cpu : usr=31.83%, sys=1.32%, ctx=968, majf=0, minf=9 00:23:20.171 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:20.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 complete : 0=0.0%, 4=88.5%, 8=10.5%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 issued rwts: total=1991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.171 filename1: (groupid=0, jobs=1): err= 0: pid=95847: Sun Aug 11 21:04:28 2024 00:23:20.171 read: IOPS=206, BW=824KiB/s (844kB/s)(8260KiB/10020msec) 00:23:20.171 slat (usec): min=5, max=8055, avg=41.22, stdev=374.35 00:23:20.171 clat (msec): min=25, max=134, avg=77.45, stdev=22.08 00:23:20.171 lat (msec): min=25, max=134, avg=77.49, stdev=22.08 00:23:20.171 clat percentiles (msec): 00:23:20.171 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:23:20.171 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 80], 00:23:20.171 | 70.00th=[ 84], 80.00th=[ 99], 90.00th=[ 112], 95.00th=[ 121], 00:23:20.171 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 132], 00:23:20.171 | 99.99th=[ 134] 00:23:20.171 bw ( KiB/s): min= 616, max= 1112, per=4.25%, avg=819.85, stdev=165.33, samples=20 00:23:20.171 iops : min= 154, max= 278, avg=204.90, stdev=41.31, samples=20 00:23:20.171 lat (msec) : 50=13.27%, 100=67.46%, 250=19.27% 00:23:20.171 cpu : usr=33.16%, sys=1.44%, ctx=892, majf=0, minf=9 00:23:20.171 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:20.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 issued rwts: total=2065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.171 filename1: (groupid=0, jobs=1): err= 0: pid=95848: Sun Aug 11 21:04:28 2024 00:23:20.171 read: IOPS=206, BW=826KiB/s (846kB/s)(8308KiB/10053msec) 00:23:20.171 slat (usec): min=6, max=4725, avg=25.14, stdev=152.50 00:23:20.171 clat (msec): min=25, max=159, avg=77.27, stdev=23.78 00:23:20.171 lat (msec): min=25, max=159, avg=77.29, stdev=23.78 00:23:20.171 clat percentiles (msec): 00:23:20.171 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 00:23:20.171 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 80], 00:23:20.171 | 70.00th=[ 84], 80.00th=[ 105], 90.00th=[ 117], 95.00th=[ 121], 00:23:20.171 | 99.00th=[ 126], 99.50th=[ 126], 99.90th=[ 144], 99.95th=[ 144], 00:23:20.171 | 99.99th=[ 161] 00:23:20.171 bw ( KiB/s): min= 608, max= 1104, per=4.27%, avg=823.70, stdev=184.62, samples=20 00:23:20.171 iops : min= 152, max= 276, avg=205.90, stdev=46.12, samples=20 00:23:20.171 lat (msec) : 50=13.14%, 100=65.19%, 250=21.67% 00:23:20.171 cpu : usr=40.48%, sys=1.61%, ctx=1291, majf=0, minf=9 00:23:20.171 IO depths 
: 1=0.1%, 2=0.8%, 4=3.0%, 8=80.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:20.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 issued rwts: total=2077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.171 filename1: (groupid=0, jobs=1): err= 0: pid=95849: Sun Aug 11 21:04:28 2024 00:23:20.171 read: IOPS=201, BW=807KiB/s (826kB/s)(8108KiB/10052msec) 00:23:20.171 slat (usec): min=6, max=8031, avg=27.40, stdev=255.80 00:23:20.171 clat (msec): min=25, max=144, avg=79.10, stdev=22.33 00:23:20.171 lat (msec): min=25, max=144, avg=79.13, stdev=22.33 00:23:20.171 clat percentiles (msec): 00:23:20.171 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 62], 00:23:20.171 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:23:20.171 | 70.00th=[ 85], 80.00th=[ 103], 90.00th=[ 117], 95.00th=[ 121], 00:23:20.171 | 99.00th=[ 127], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 146], 00:23:20.171 | 99.99th=[ 146] 00:23:20.171 bw ( KiB/s): min= 640, max= 1048, per=4.16%, avg=803.80, stdev=147.07, samples=20 00:23:20.171 iops : min= 160, max= 262, avg=200.90, stdev=36.76, samples=20 00:23:20.171 lat (msec) : 50=10.31%, 100=68.92%, 250=20.77% 00:23:20.171 cpu : usr=35.31%, sys=1.68%, ctx=1127, majf=0, minf=10 00:23:20.171 IO depths : 1=0.1%, 2=1.5%, 4=6.1%, 8=77.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:23:20.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 complete : 0=0.0%, 4=88.8%, 8=9.8%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.171 issued rwts: total=2027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.171 filename1: (groupid=0, jobs=1): err= 0: pid=95850: Sun Aug 11 21:04:28 2024 00:23:20.171 read: IOPS=208, BW=832KiB/s (852kB/s)(8352KiB/10034msec) 00:23:20.171 slat (usec): min=6, max=8044, avg=30.30, stdev=226.43 00:23:20.171 clat (msec): min=32, max=140, avg=76.71, stdev=22.23 00:23:20.171 lat (msec): min=32, max=140, avg=76.74, stdev=22.23 00:23:20.171 clat percentiles (msec): 00:23:20.171 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:23:20.172 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:23:20.172 | 70.00th=[ 84], 80.00th=[ 100], 90.00th=[ 113], 95.00th=[ 120], 00:23:20.172 | 99.00th=[ 124], 99.50th=[ 124], 99.90th=[ 142], 99.95th=[ 142], 00:23:20.172 | 99.99th=[ 142] 00:23:20.172 bw ( KiB/s): min= 616, max= 1104, per=4.29%, avg=828.70, stdev=171.23, samples=20 00:23:20.172 iops : min= 154, max= 276, avg=207.15, stdev=42.80, samples=20 00:23:20.172 lat (msec) : 50=13.79%, 100=66.76%, 250=19.44% 00:23:20.172 cpu : usr=38.98%, sys=1.66%, ctx=1153, majf=0, minf=9 00:23:20.172 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:20.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.172 filename1: (groupid=0, jobs=1): err= 0: pid=95851: Sun Aug 11 21:04:28 2024 00:23:20.172 read: IOPS=209, BW=840KiB/s (860kB/s)(8416KiB/10020msec) 00:23:20.172 slat (usec): min=5, max=9057, avg=39.17, stdev=350.56 00:23:20.172 clat (msec): min=25, max=144, avg=76.02, stdev=22.47 00:23:20.172 lat 
(msec): min=25, max=144, avg=76.06, stdev=22.46 00:23:20.172 clat percentiles (msec): 00:23:20.172 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:23:20.172 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:23:20.172 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 120], 00:23:20.172 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 136], 99.95th=[ 136], 00:23:20.172 | 99.99th=[ 144] 00:23:20.172 bw ( KiB/s): min= 616, max= 1112, per=4.33%, avg=835.45, stdev=168.63, samples=20 00:23:20.172 iops : min= 154, max= 278, avg=208.80, stdev=42.13, samples=20 00:23:20.172 lat (msec) : 50=15.26%, 100=66.21%, 250=18.54% 00:23:20.172 cpu : usr=35.30%, sys=1.71%, ctx=1014, majf=0, minf=9 00:23:20.172 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:20.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.172 filename1: (groupid=0, jobs=1): err= 0: pid=95852: Sun Aug 11 21:04:28 2024 00:23:20.172 read: IOPS=196, BW=787KiB/s (805kB/s)(7880KiB/10019msec) 00:23:20.172 slat (usec): min=4, max=8045, avg=35.45, stdev=339.56 00:23:20.172 clat (msec): min=34, max=149, avg=81.17, stdev=21.41 00:23:20.172 lat (msec): min=34, max=149, avg=81.20, stdev=21.41 00:23:20.172 clat percentiles (msec): 00:23:20.172 | 1.00th=[ 40], 5.00th=[ 49], 10.00th=[ 60], 20.00th=[ 65], 00:23:20.172 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:23:20.172 | 70.00th=[ 88], 80.00th=[ 107], 90.00th=[ 117], 95.00th=[ 121], 00:23:20.172 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 150], 99.95th=[ 150], 00:23:20.172 | 99.99th=[ 150] 00:23:20.172 bw ( KiB/s): min= 624, max= 1016, per=4.05%, avg=781.60, stdev=133.59, samples=20 00:23:20.172 iops : min= 156, max= 254, avg=195.40, stdev=33.40, samples=20 00:23:20.172 lat (msec) : 50=6.14%, 100=71.98%, 250=21.88% 00:23:20.172 cpu : usr=35.01%, sys=1.61%, ctx=1063, majf=0, minf=9 00:23:20.172 IO depths : 1=0.1%, 2=2.4%, 4=9.8%, 8=73.0%, 16=14.7%, 32=0.0%, >=64=0.0% 00:23:20.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 complete : 0=0.0%, 4=89.8%, 8=8.0%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 issued rwts: total=1970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.172 filename2: (groupid=0, jobs=1): err= 0: pid=95853: Sun Aug 11 21:04:28 2024 00:23:20.172 read: IOPS=213, BW=855KiB/s (876kB/s)(8564KiB/10011msec) 00:23:20.172 slat (usec): min=4, max=12060, avg=37.57, stdev=346.77 00:23:20.172 clat (msec): min=16, max=129, avg=74.66, stdev=23.26 00:23:20.172 lat (msec): min=16, max=129, avg=74.70, stdev=23.27 00:23:20.172 clat percentiles (msec): 00:23:20.172 | 1.00th=[ 31], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 53], 00:23:20.172 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:23:20.172 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 114], 95.00th=[ 120], 00:23:20.172 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 130], 99.95th=[ 130], 00:23:20.172 | 99.99th=[ 130] 00:23:20.172 bw ( KiB/s): min= 664, max= 1158, per=4.43%, avg=855.89, stdev=169.87, samples=19 00:23:20.172 iops : min= 166, max= 289, avg=213.89, stdev=42.41, samples=19 00:23:20.172 lat (msec) : 20=0.28%, 50=16.63%, 100=64.13%, 250=18.96% 00:23:20.172 cpu : usr=42.67%, 
sys=1.76%, ctx=1234, majf=0, minf=9 00:23:20.172 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:20.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 issued rwts: total=2141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.172 filename2: (groupid=0, jobs=1): err= 0: pid=95854: Sun Aug 11 21:04:28 2024 00:23:20.172 read: IOPS=170, BW=683KiB/s (699kB/s)(6864KiB/10053msec) 00:23:20.172 slat (usec): min=4, max=8046, avg=34.75, stdev=348.82 00:23:20.172 clat (msec): min=32, max=168, avg=93.43, stdev=25.16 00:23:20.172 lat (msec): min=32, max=168, avg=93.47, stdev=25.16 00:23:20.172 clat percentiles (msec): 00:23:20.172 | 1.00th=[ 40], 5.00th=[ 61], 10.00th=[ 66], 20.00th=[ 72], 00:23:20.172 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 94], 60.00th=[ 97], 00:23:20.172 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 144], 00:23:20.172 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:23:20.172 | 99.99th=[ 169] 00:23:20.172 bw ( KiB/s): min= 496, max= 896, per=3.52%, avg=679.50, stdev=147.13, samples=20 00:23:20.172 iops : min= 124, max= 224, avg=169.80, stdev=36.69, samples=20 00:23:20.172 lat (msec) : 50=1.98%, 100=59.62%, 250=38.40% 00:23:20.172 cpu : usr=38.00%, sys=1.58%, ctx=1012, majf=0, minf=9 00:23:20.172 IO depths : 1=0.1%, 2=6.1%, 4=24.8%, 8=56.4%, 16=12.6%, 32=0.0%, >=64=0.0% 00:23:20.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 complete : 0=0.0%, 4=94.5%, 8=0.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 issued rwts: total=1716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.172 filename2: (groupid=0, jobs=1): err= 0: pid=95855: Sun Aug 11 21:04:28 2024 00:23:20.172 read: IOPS=187, BW=751KiB/s (769kB/s)(7544KiB/10048msec) 00:23:20.172 slat (usec): min=5, max=8036, avg=51.34, stdev=480.48 00:23:20.172 clat (msec): min=37, max=155, avg=84.86, stdev=22.55 00:23:20.172 lat (msec): min=37, max=155, avg=84.91, stdev=22.56 00:23:20.172 clat percentiles (msec): 00:23:20.172 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 71], 00:23:20.172 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 85], 00:23:20.172 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 121], 00:23:20.172 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 157], 00:23:20.172 | 99.99th=[ 157] 00:23:20.172 bw ( KiB/s): min= 512, max= 1024, per=3.87%, avg=747.80, stdev=145.65, samples=20 00:23:20.172 iops : min= 128, max= 256, avg=186.90, stdev=36.40, samples=20 00:23:20.172 lat (msec) : 50=7.32%, 100=64.69%, 250=28.00% 00:23:20.172 cpu : usr=36.13%, sys=1.52%, ctx=874, majf=0, minf=9 00:23:20.172 IO depths : 1=0.2%, 2=2.2%, 4=8.9%, 8=73.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:20.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 complete : 0=0.0%, 4=89.8%, 8=8.2%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 issued rwts: total=1886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.172 filename2: (groupid=0, jobs=1): err= 0: pid=95856: Sun Aug 11 21:04:28 2024 00:23:20.172 read: IOPS=214, BW=860KiB/s (880kB/s)(8608KiB/10013msec) 00:23:20.172 slat (usec): min=3, max=4119, avg=31.42, stdev=194.53 00:23:20.172 clat (msec): 
min=12, max=177, avg=74.31, stdev=24.16 00:23:20.172 lat (msec): min=12, max=177, avg=74.34, stdev=24.16 00:23:20.172 clat percentiles (msec): 00:23:20.172 | 1.00th=[ 32], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 52], 00:23:20.172 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:23:20.172 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 112], 95.00th=[ 120], 00:23:20.172 | 99.00th=[ 127], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 178], 00:23:20.172 | 99.99th=[ 178] 00:23:20.172 bw ( KiB/s): min= 656, max= 1176, per=4.42%, avg=853.90, stdev=190.75, samples=20 00:23:20.172 iops : min= 164, max= 294, avg=213.45, stdev=47.65, samples=20 00:23:20.172 lat (msec) : 20=0.28%, 50=17.70%, 100=63.57%, 250=18.45% 00:23:20.172 cpu : usr=41.83%, sys=1.82%, ctx=1259, majf=0, minf=9 00:23:20.172 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:20.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 issued rwts: total=2152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.172 filename2: (groupid=0, jobs=1): err= 0: pid=95857: Sun Aug 11 21:04:28 2024 00:23:20.172 read: IOPS=206, BW=826KiB/s (846kB/s)(8320KiB/10070msec) 00:23:20.172 slat (usec): min=4, max=374, avg=19.60, stdev=13.31 00:23:20.172 clat (msec): min=2, max=144, avg=77.18, stdev=25.47 00:23:20.172 lat (msec): min=2, max=144, avg=77.20, stdev=25.47 00:23:20.172 clat percentiles (msec): 00:23:20.172 | 1.00th=[ 7], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 60], 00:23:20.172 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 80], 00:23:20.172 | 70.00th=[ 85], 80.00th=[ 105], 90.00th=[ 114], 95.00th=[ 121], 00:23:20.172 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 136], 99.95th=[ 138], 00:23:20.172 | 99.99th=[ 144] 00:23:20.172 bw ( KiB/s): min= 608, max= 1269, per=4.29%, avg=827.20, stdev=181.87, samples=20 00:23:20.172 iops : min= 152, max= 317, avg=206.75, stdev=45.39, samples=20 00:23:20.172 lat (msec) : 4=0.10%, 10=2.98%, 50=10.58%, 100=63.61%, 250=22.74% 00:23:20.172 cpu : usr=39.05%, sys=1.57%, ctx=1117, majf=0, minf=9 00:23:20.172 IO depths : 1=0.2%, 2=1.1%, 4=3.4%, 8=79.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:20.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.172 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.172 filename2: (groupid=0, jobs=1): err= 0: pid=95858: Sun Aug 11 21:04:28 2024 00:23:20.172 read: IOPS=196, BW=786KiB/s (804kB/s)(7916KiB/10077msec) 00:23:20.172 slat (usec): min=4, max=8055, avg=30.22, stdev=312.56 00:23:20.172 clat (msec): min=8, max=157, avg=81.20, stdev=25.52 00:23:20.172 lat (msec): min=8, max=157, avg=81.23, stdev=25.52 00:23:20.172 clat percentiles (msec): 00:23:20.173 | 1.00th=[ 10], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 62], 00:23:20.173 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:23:20.173 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 121], 00:23:20.173 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 157], 00:23:20.173 | 99.99th=[ 157] 00:23:20.173 bw ( KiB/s): min= 512, max= 1269, per=4.06%, avg=784.45, stdev=188.96, samples=20 00:23:20.173 iops : min= 128, max= 317, avg=196.05, stdev=47.19, samples=20 00:23:20.173 lat 
(msec) : 10=1.92%, 20=0.51%, 50=7.38%, 100=63.72%, 250=26.48% 00:23:20.173 cpu : usr=36.31%, sys=1.55%, ctx=1133, majf=0, minf=9 00:23:20.173 IO depths : 1=0.1%, 2=2.3%, 4=9.0%, 8=73.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:20.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.173 complete : 0=0.0%, 4=90.0%, 8=8.0%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.173 issued rwts: total=1979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.173 filename2: (groupid=0, jobs=1): err= 0: pid=95859: Sun Aug 11 21:04:28 2024 00:23:20.173 read: IOPS=214, BW=857KiB/s (877kB/s)(8576KiB/10011msec) 00:23:20.173 slat (usec): min=4, max=7035, avg=29.21, stdev=213.84 00:23:20.173 clat (msec): min=24, max=178, avg=74.56, stdev=23.98 00:23:20.173 lat (msec): min=24, max=178, avg=74.59, stdev=23.98 00:23:20.173 clat percentiles (msec): 00:23:20.173 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 51], 00:23:20.173 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:23:20.173 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 113], 95.00th=[ 121], 00:23:20.173 | 99.00th=[ 129], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 178], 00:23:20.173 | 99.99th=[ 178] 00:23:20.173 bw ( KiB/s): min= 657, max= 1176, per=4.42%, avg=853.20, stdev=189.05, samples=20 00:23:20.173 iops : min= 164, max= 294, avg=213.25, stdev=47.22, samples=20 00:23:20.173 lat (msec) : 50=19.26%, 100=62.08%, 250=18.66% 00:23:20.173 cpu : usr=39.54%, sys=1.78%, ctx=1149, majf=0, minf=9 00:23:20.173 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:20.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.173 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.173 issued rwts: total=2144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.173 filename2: (groupid=0, jobs=1): err= 0: pid=95860: Sun Aug 11 21:04:28 2024 00:23:20.173 read: IOPS=194, BW=778KiB/s (797kB/s)(7796KiB/10020msec) 00:23:20.173 slat (usec): min=4, max=8047, avg=41.88, stdev=359.04 00:23:20.173 clat (msec): min=33, max=168, avg=81.94, stdev=21.71 00:23:20.173 lat (msec): min=33, max=168, avg=81.98, stdev=21.70 00:23:20.173 clat percentiles (msec): 00:23:20.173 | 1.00th=[ 44], 5.00th=[ 49], 10.00th=[ 55], 20.00th=[ 65], 00:23:20.173 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 83], 00:23:20.173 | 70.00th=[ 92], 80.00th=[ 107], 90.00th=[ 115], 95.00th=[ 122], 00:23:20.173 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 169], 99.95th=[ 169], 00:23:20.173 | 99.99th=[ 169] 00:23:20.173 bw ( KiB/s): min= 513, max= 1024, per=4.01%, avg=773.40, stdev=145.29, samples=20 00:23:20.173 iops : min= 128, max= 256, avg=193.30, stdev=36.33, samples=20 00:23:20.173 lat (msec) : 50=6.62%, 100=69.16%, 250=24.22% 00:23:20.173 cpu : usr=37.67%, sys=1.64%, ctx=1200, majf=0, minf=9 00:23:20.173 IO depths : 1=0.1%, 2=2.2%, 4=8.7%, 8=74.2%, 16=14.8%, 32=0.0%, >=64=0.0% 00:23:20.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.173 complete : 0=0.0%, 4=89.5%, 8=8.6%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.173 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:20.173 00:23:20.173 Run status group 0 (all jobs): 00:23:20.173 READ: bw=18.8MiB/s (19.8MB/s), 683KiB/s-860KiB/s (699kB/s-880kB/s), io=190MiB (199MB), 
run=10006-10077msec 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 bdev_null0 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 [2024-08-11 21:04:29.264871] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # 
xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 bdev_null1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.173 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@552 -- # config=() 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@552 -- # local subsystem config 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:23:20.174 { 00:23:20.174 "params": { 00:23:20.174 "name": "Nvme$subsystem", 00:23:20.174 "trtype": "$TEST_TRANSPORT", 00:23:20.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.174 "adrfam": "ipv4", 00:23:20.174 "trsvcid": "$NVMF_PORT", 00:23:20.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.174 "hdgst": ${hdgst:-false}, 00:23:20.174 "ddgst": ${ddgst:-false} 00:23:20.174 }, 00:23:20.174 "method": "bdev_nvme_attach_controller" 00:23:20.174 } 00:23:20.174 EOF 00:23:20.174 )") 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:20.174 21:04:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # cat 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 00:23:20.174 { 00:23:20.174 "params": { 00:23:20.174 "name": "Nvme$subsystem", 00:23:20.174 "trtype": "$TEST_TRANSPORT", 00:23:20.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.174 "adrfam": "ipv4", 00:23:20.174 "trsvcid": "$NVMF_PORT", 00:23:20.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.174 "hdgst": ${hdgst:-false}, 00:23:20.174 "ddgst": ${ddgst:-false} 00:23:20.174 }, 00:23:20.174 "method": "bdev_nvme_attach_controller" 00:23:20.174 } 00:23:20.174 EOF 00:23:20.174 )") 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@574 -- # cat 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@576 -- # jq . 
00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@577 -- # IFS=, 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:23:20.174 "params": { 00:23:20.174 "name": "Nvme0", 00:23:20.174 "trtype": "tcp", 00:23:20.174 "traddr": "10.0.0.3", 00:23:20.174 "adrfam": "ipv4", 00:23:20.174 "trsvcid": "4420", 00:23:20.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:20.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:20.174 "hdgst": false, 00:23:20.174 "ddgst": false 00:23:20.174 }, 00:23:20.174 "method": "bdev_nvme_attach_controller" 00:23:20.174 },{ 00:23:20.174 "params": { 00:23:20.174 "name": "Nvme1", 00:23:20.174 "trtype": "tcp", 00:23:20.174 "traddr": "10.0.0.3", 00:23:20.174 "adrfam": "ipv4", 00:23:20.174 "trsvcid": "4420", 00:23:20.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.174 "hdgst": false, 00:23:20.174 "ddgst": false 00:23:20.174 }, 00:23:20.174 "method": "bdev_nvme_attach_controller" 00:23:20.174 }' 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:20.174 21:04:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:20.174 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:20.174 ... 00:23:20.174 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:20.174 ... 
00:23:20.174 fio-3.35 00:23:20.174 Starting 4 threads 00:23:20.174 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:24.374 00:23:24.374 filename0: (groupid=0, jobs=1): err= 0: pid=95998: Sun Aug 11 21:04:35 2024 00:23:24.374 read: IOPS=1938, BW=15.1MiB/s (15.9MB/s)(75.7MiB/5001msec) 00:23:24.374 slat (nsec): min=6694, max=79298, avg=18179.32, stdev=10614.85 00:23:24.374 clat (usec): min=661, max=6929, avg=4066.99, stdev=1033.31 00:23:24.374 lat (usec): min=675, max=6954, avg=4085.17, stdev=1031.12 00:23:24.374 clat percentiles (usec): 00:23:24.374 | 1.00th=[ 1663], 5.00th=[ 1991], 10.00th=[ 2212], 20.00th=[ 2999], 00:23:24.374 | 30.00th=[ 3687], 40.00th=[ 4490], 50.00th=[ 4621], 60.00th=[ 4686], 00:23:24.374 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 5014], 00:23:24.374 | 99.00th=[ 5211], 99.50th=[ 5473], 99.90th=[ 5997], 99.95th=[ 6194], 00:23:24.374 | 99.99th=[ 6915] 00:23:24.374 bw ( KiB/s): min=13056, max=19552, per=23.37%, avg=15772.44, stdev=2610.71, samples=9 00:23:24.374 iops : min= 1632, max= 2444, avg=1971.56, stdev=326.34, samples=9 00:23:24.374 lat (usec) : 750=0.03%, 1000=0.10% 00:23:24.374 lat (msec) : 2=5.28%, 4=28.09%, 10=66.49% 00:23:24.374 cpu : usr=93.80%, sys=5.30%, ctx=13, majf=0, minf=0 00:23:24.374 IO depths : 1=0.4%, 2=12.9%, 4=56.6%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:24.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.374 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.374 issued rwts: total=9693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.374 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:24.374 filename0: (groupid=0, jobs=1): err= 0: pid=95999: Sun Aug 11 21:04:35 2024 00:23:24.374 read: IOPS=2022, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5002msec) 00:23:24.374 slat (nsec): min=5141, max=66182, avg=22727.62, stdev=8916.25 00:23:24.374 clat (usec): min=1145, max=6734, avg=3889.02, stdev=1031.62 00:23:24.374 lat (usec): min=1155, max=6752, avg=3911.75, stdev=1029.88 00:23:24.374 clat percentiles (usec): 00:23:24.374 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2638], 00:23:24.374 | 30.00th=[ 2933], 40.00th=[ 4080], 50.00th=[ 4490], 60.00th=[ 4555], 00:23:24.374 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5014], 00:23:24.374 | 99.00th=[ 5211], 99.50th=[ 5669], 99.90th=[ 6063], 99.95th=[ 6128], 00:23:24.374 | 99.99th=[ 6718] 00:23:24.374 bw ( KiB/s): min=13056, max=18208, per=23.70%, avg=15994.67, stdev=2130.93, samples=9 00:23:24.374 iops : min= 1632, max= 2276, avg=1999.33, stdev=266.37, samples=9 00:23:24.374 lat (msec) : 2=0.93%, 4=38.36%, 10=60.71% 00:23:24.374 cpu : usr=93.72%, sys=5.28%, ctx=11, majf=0, minf=10 00:23:24.374 IO depths : 1=0.4%, 2=9.1%, 4=58.7%, 8=31.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:24.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.374 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.374 issued rwts: total=10116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.374 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:24.374 filename1: (groupid=0, jobs=1): err= 0: pid=96000: Sun Aug 11 21:04:35 2024 00:23:24.374 read: IOPS=2286, BW=17.9MiB/s (18.7MB/s)(89.4MiB/5002msec) 00:23:24.374 slat (nsec): min=6540, max=81127, avg=16889.09, stdev=10300.52 00:23:24.374 clat (usec): min=446, max=6755, avg=3456.95, stdev=1046.78 00:23:24.374 lat (usec): min=459, max=6780, avg=3473.84, stdev=1045.75 00:23:24.374 
clat percentiles (usec): 00:23:24.374 | 1.00th=[ 1827], 5.00th=[ 2008], 10.00th=[ 2089], 20.00th=[ 2311], 00:23:24.374 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 3195], 60.00th=[ 4080], 00:23:24.374 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4817], 00:23:24.374 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5932], 99.95th=[ 6521], 00:23:24.374 | 99.99th=[ 6587] 00:23:24.374 bw ( KiB/s): min=18032, max=19440, per=27.19%, avg=18355.56, stdev=435.97, samples=9 00:23:24.374 iops : min= 2254, max= 2430, avg=2294.44, stdev=54.50, samples=9 00:23:24.374 lat (usec) : 500=0.01%, 1000=0.04% 00:23:24.374 lat (msec) : 2=4.59%, 4=54.83%, 10=40.53% 00:23:24.374 cpu : usr=93.74%, sys=5.24%, ctx=63, majf=0, minf=0 00:23:24.374 IO depths : 1=0.2%, 2=0.8%, 4=63.3%, 8=35.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:24.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.374 complete : 0=0.0%, 4=99.7%, 8=0.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.374 issued rwts: total=11437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.374 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:24.374 filename1: (groupid=0, jobs=1): err= 0: pid=96001: Sun Aug 11 21:04:35 2024 00:23:24.374 read: IOPS=2190, BW=17.1MiB/s (17.9MB/s)(85.6MiB/5002msec) 00:23:24.374 slat (nsec): min=6671, max=81225, avg=22694.99, stdev=10778.14 00:23:24.374 clat (usec): min=1589, max=7038, avg=3595.27, stdev=1034.69 00:23:24.374 lat (usec): min=1608, max=7075, avg=3617.96, stdev=1034.43 00:23:24.374 clat percentiles (usec): 00:23:24.374 | 1.00th=[ 1876], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2442], 00:23:24.374 | 30.00th=[ 2671], 40.00th=[ 2966], 50.00th=[ 3818], 60.00th=[ 4424], 00:23:24.374 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4817], 00:23:24.374 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5407], 99.95th=[ 5473], 00:23:24.374 | 99.99th=[ 6915] 00:23:24.374 bw ( KiB/s): min=14989, max=18416, per=25.89%, avg=17478.78, stdev=1145.80, samples=9 00:23:24.374 iops : min= 1873, max= 2302, avg=2184.78, stdev=143.39, samples=9 00:23:24.374 lat (msec) : 2=3.43%, 4=48.02%, 10=48.55% 00:23:24.374 cpu : usr=93.78%, sys=5.24%, ctx=9, majf=0, minf=9 00:23:24.374 IO depths : 1=0.4%, 2=3.2%, 4=61.9%, 8=34.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:24.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.374 complete : 0=0.0%, 4=98.8%, 8=1.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.374 issued rwts: total=10956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.375 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:24.375 00:23:24.375 Run status group 0 (all jobs): 00:23:24.375 READ: bw=65.9MiB/s (69.1MB/s), 15.1MiB/s-17.9MiB/s (15.9MB/s-18.7MB/s), io=330MiB (346MB), run=5001-5002msec 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:24.634 00:23:24.634 real 0m23.622s 00:23:24.634 user 2m5.330s 00:23:24.634 sys 0m6.740s 00:23:24.634 ************************************ 00:23:24.634 END TEST fio_dif_rand_params 00:23:24.634 ************************************ 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:24.634 21:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:24.634 21:04:35 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:24.634 21:04:35 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:24.634 21:04:35 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:24.634 21:04:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:24.634 ************************************ 00:23:24.634 START TEST fio_dif_digest 00:23:24.634 ************************************ 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:24.634 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:24.634 bdev_null0 00:23:24.635 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:24.635 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:24.635 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:24.635 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:24.635 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:24.635 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:24.635 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:24.635 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:24.893 [2024-08-11 21:04:35.418138] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@552 -- # config=() 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@552 -- # local subsystem config 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:24.893 21:04:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # for subsystem in "${@:-1}" 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@574 -- # config+=("$(cat <<-EOF 
00:23:24.894 { 00:23:24.894 "params": { 00:23:24.894 "name": "Nvme$subsystem", 00:23:24.894 "trtype": "$TEST_TRANSPORT", 00:23:24.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.894 "adrfam": "ipv4", 00:23:24.894 "trsvcid": "$NVMF_PORT", 00:23:24.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.894 "hdgst": ${hdgst:-false}, 00:23:24.894 "ddgst": ${ddgst:-false} 00:23:24.894 }, 00:23:24.894 "method": "bdev_nvme_attach_controller" 00:23:24.894 } 00:23:24.894 EOF 00:23:24.894 )") 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@574 -- # cat 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@576 -- # jq . 
00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@577 -- # IFS=, 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # printf '%s\n' '{ 00:23:24.894 "params": { 00:23:24.894 "name": "Nvme0", 00:23:24.894 "trtype": "tcp", 00:23:24.894 "traddr": "10.0.0.3", 00:23:24.894 "adrfam": "ipv4", 00:23:24.894 "trsvcid": "4420", 00:23:24.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:24.894 "hdgst": true, 00:23:24.894 "ddgst": true 00:23:24.894 }, 00:23:24.894 "method": "bdev_nvme_attach_controller" 00:23:24.894 }' 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:24.894 21:04:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.894 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:24.894 ... 
00:23:24.894 fio-3.35 00:23:24.894 Starting 3 threads 00:23:24.894 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:37.099 00:23:37.099 filename0: (groupid=0, jobs=1): err= 0: pid=96108: Sun Aug 11 21:04:46 2024 00:23:37.099 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(289MiB/10010msec) 00:23:37.099 slat (nsec): min=6780, max=57031, avg=11788.28, stdev=6459.14 00:23:37.099 clat (usec): min=9897, max=14531, avg=12952.18, stdev=287.22 00:23:37.099 lat (usec): min=9905, max=14551, avg=12963.97, stdev=287.44 00:23:37.099 clat percentiles (usec): 00:23:37.099 | 1.00th=[12256], 5.00th=[12518], 10.00th=[12649], 20.00th=[12780], 00:23:37.099 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:23:37.099 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:23:37.099 | 99.00th=[13698], 99.50th=[13698], 99.90th=[14484], 99.95th=[14484], 00:23:37.099 | 99.99th=[14484] 00:23:37.099 bw ( KiB/s): min=29184, max=30720, per=33.36%, avg=29585.05, stdev=467.46, samples=19 00:23:37.099 iops : min= 228, max= 240, avg=231.11, stdev= 3.63, samples=19 00:23:37.099 lat (msec) : 10=0.13%, 20=99.87% 00:23:37.099 cpu : usr=93.52%, sys=5.85%, ctx=41, majf=0, minf=0 00:23:37.099 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:37.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.099 issued rwts: total=2313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.099 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:37.099 filename0: (groupid=0, jobs=1): err= 0: pid=96109: Sun Aug 11 21:04:46 2024 00:23:37.099 read: IOPS=230, BW=28.9MiB/s (30.3MB/s)(289MiB/10002msec) 00:23:37.099 slat (nsec): min=7345, max=72966, avg=17065.71, stdev=13602.18 00:23:37.099 clat (usec): min=12151, max=14647, avg=12940.40, stdev=273.87 00:23:37.099 lat (usec): min=12161, max=14687, avg=12957.47, stdev=273.47 00:23:37.099 clat percentiles (usec): 00:23:37.099 | 1.00th=[12256], 5.00th=[12518], 10.00th=[12649], 20.00th=[12780], 00:23:37.099 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:23:37.099 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:23:37.099 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14615], 99.95th=[14615], 00:23:37.099 | 99.99th=[14615] 00:23:37.099 bw ( KiB/s): min=29184, max=30720, per=33.36%, avg=29588.21, stdev=469.84, samples=19 00:23:37.099 iops : min= 228, max= 240, avg=231.16, stdev= 3.67, samples=19 00:23:37.099 lat (msec) : 20=100.00% 00:23:37.099 cpu : usr=92.57%, sys=6.59%, ctx=84, majf=0, minf=0 00:23:37.099 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:37.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.099 issued rwts: total=2310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.099 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:37.099 filename0: (groupid=0, jobs=1): err= 0: pid=96110: Sun Aug 11 21:04:46 2024 00:23:37.100 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(289MiB/10007msec) 00:23:37.100 slat (nsec): min=5658, max=50621, avg=10773.60, stdev=4906.93 00:23:37.100 clat (usec): min=7060, max=14677, avg=12952.43, stdev=346.83 00:23:37.100 lat (usec): min=7068, max=14697, avg=12963.20, stdev=346.82 00:23:37.100 clat percentiles (usec): 00:23:37.100 | 1.00th=[12256], 5.00th=[12518], 
10.00th=[12649], 20.00th=[12780], 00:23:37.100 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:23:37.100 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13304], 95.00th=[13435], 00:23:37.100 | 99.00th=[13566], 99.50th=[13698], 99.90th=[14615], 99.95th=[14615], 00:23:37.100 | 99.99th=[14615] 00:23:37.100 bw ( KiB/s): min=29184, max=29952, per=33.36%, avg=29588.21, stdev=393.98, samples=19 00:23:37.100 iops : min= 228, max= 234, avg=231.16, stdev= 3.08, samples=19 00:23:37.100 lat (msec) : 10=0.13%, 20=99.87% 00:23:37.100 cpu : usr=92.75%, sys=6.64%, ctx=12, majf=0, minf=0 00:23:37.100 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:37.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:37.100 issued rwts: total=2313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:37.100 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:37.100 00:23:37.100 Run status group 0 (all jobs): 00:23:37.100 READ: bw=86.6MiB/s (90.8MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=867MiB (909MB), run=10002-10010msec 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:37.100 ************************************ 00:23:37.100 END TEST fio_dif_digest 00:23:37.100 ************************************ 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:37.100 00:23:37.100 real 0m11.001s 00:23:37.100 user 0m28.548s 00:23:37.100 sys 0m2.186s 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:37.100 21:04:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:37.100 21:04:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:37.100 21:04:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@508 -- # nvmfcleanup 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.100 rmmod nvme_tcp 00:23:37.100 rmmod nvme_fabrics 00:23:37.100 rmmod nvme_keyring 00:23:37.100 21:04:46 
nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@509 -- # '[' -n 95355 ']' 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@510 -- # killprocess 95355 00:23:37.100 21:04:46 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 95355 ']' 00:23:37.100 21:04:46 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 95355 00:23:37.100 21:04:46 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:23:37.100 21:04:46 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:37.100 21:04:46 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95355 00:23:37.100 killing process with pid 95355 00:23:37.100 21:04:46 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:37.100 21:04:46 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:37.100 21:04:46 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95355' 00:23:37.100 21:04:46 nvmf_dif -- common/autotest_common.sh@965 -- # kill 95355 00:23:37.100 21:04:46 nvmf_dif -- common/autotest_common.sh@970 -- # wait 95355 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@512 -- # '[' iso == iso ']' 00:23:37.100 21:04:46 nvmf_dif -- nvmf/common.sh@513 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:37.100 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:37.100 Waiting for block devices as requested 00:23:37.100 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:37.100 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@293 -- # iptr 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@783 -- # iptables-save 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@783 -- # iptables-restore 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@241 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@242 -- # remove_spdk_ns 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.100 21:04:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:37.100 21:04:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.100 21:04:47 nvmf_dif -- nvmf/common.sh@296 -- # return 0 00:23:37.100 00:23:37.100 real 1m0.537s 00:23:37.100 user 3m50.033s 00:23:37.100 sys 0m18.345s 00:23:37.100 21:04:47 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:37.100 21:04:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:37.100 ************************************ 00:23:37.100 END TEST nvmf_dif 00:23:37.100 ************************************ 00:23:37.100 21:04:47 -- spdk/autotest.sh@299 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:37.100 21:04:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:37.100 21:04:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:37.100 21:04:47 -- common/autotest_common.sh@10 -- # set +x 00:23:37.100 ************************************ 00:23:37.100 START TEST nvmf_abort_qd_sizes 00:23:37.100 ************************************ 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:37.100 * Looking for test storage... 00:23:37.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.100 21:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # '[' -z tcp ']' 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@466 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # prepare_net_devs 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # 
local -g is_hw=no 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # remove_spdk_ns 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # [[ virt != virt ]] 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # [[ no == yes ]] 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ virt == phy ]] 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # [[ virt == phy-fallback ]] 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ tcp == tcp ]] 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@452 -- # nvmf_veth_init 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_BRIDGE=nvmf_br 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_init_br nomaster 00:23:37.101 Cannot find device "nvmf_init_br" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_init_br2 nomaster 00:23:37.101 Cannot find device "nvmf_init_br2" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link set nvmf_tgt_br nomaster 00:23:37.101 Cannot find device "nvmf_tgt_br" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link set nvmf_tgt_br2 nomaster 00:23:37.101 Cannot find device "nvmf_tgt_br2" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- 
nvmf/common.sh@162 -- # ip link set nvmf_init_br down 00:23:37.101 Cannot find device "nvmf_init_br" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 down 00:23:37.101 Cannot find device "nvmf_init_br2" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br down 00:23:37.101 Cannot find device "nvmf_tgt_br" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 down 00:23:37.101 Cannot find device "nvmf_tgt_br2" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link delete nvmf_br type bridge 00:23:37.101 Cannot find device "nvmf_br" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link delete nvmf_init_if 00:23:37.101 Cannot find device "nvmf_init_if" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link delete nvmf_init_if2 00:23:37.101 Cannot find device "nvmf_init_if2" 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:37.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:37.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns add nvmf_tgt_ns_spdk 00:23:37.101 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@176 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link set nvmf_init_if up 00:23:37.361 21:04:47 nvmf_abort_qd_sizes 
-- nvmf/common.sh@193 -- # ip link set nvmf_init_if2 up 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # ip link set nvmf_init_br up 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@195 -- # ip link set nvmf_init_br2 up 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br up 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 up 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip link add nvmf_br type bridge 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip link set nvmf_br up 00:23:37.361 21:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link set nvmf_init_br master nvmf_br 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@210 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@782 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@215 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@782 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ping -c 1 10.0.0.3 00:23:37.361 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:37.361 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:23:37.361 00:23:37.361 --- 10.0.0.3 ping statistics --- 00:23:37.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.361 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ping -c 1 10.0.0.4 00:23:37.361 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:37.361 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:23:37.361 00:23:37.361 --- 10.0.0.4 ping statistics --- 00:23:37.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.361 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@220 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:37.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:37.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:23:37.361 00:23:37.361 --- 10.0.0.1 ping statistics --- 00:23:37.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.361 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@221 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:37.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:23:37.361 00:23:37.361 --- 10.0.0.2 ping statistics --- 00:23:37.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.361 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@453 -- # return 0 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # '[' iso == iso ']' 00:23:37.361 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:38.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:38.297 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:38.297 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:38.297 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # [[ tcp == \r\d\m\a ]] 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@485 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # '[' tcp == tcp ']' 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # modprobe nvme-tcp 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@499 -- # timing_enter start_nvmf_tgt 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:38.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@501 -- # nvmfpid=96750 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # waitforlisten 96750 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 96750 ']' 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.298 21:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:38.298 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:38.298 [2024-08-11 21:04:49.016728] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:23:38.298 [2024-08-11 21:04:49.017636] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.556 [2024-08-11 21:04:49.159393] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:38.556 [2024-08-11 21:04:49.255386] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.556 [2024-08-11 21:04:49.255808] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.556 [2024-08-11 21:04:49.255963] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.556 [2024-08-11 21:04:49.255979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.556 [2024-08-11 21:04:49.255988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.556 [2024-08-11 21:04:49.256065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.556 [2024-08-11 21:04:49.256333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.556 [2024-08-11 21:04:49.256458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.556 [2024-08-11 21:04:49.256469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.556 [2024-08-11 21:04:49.313841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_exit start_nvmf_tgt 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:23:39.492 
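Stripped of the xtrace noise, nvmfappstart launches the SPDK target inside the namespace and then blocks until its RPC socket answers; a simplified sketch of that step (the command line is taken from the trace, the wait loop is a stand-in for waitforlisten):

    # Start the target pinned to four cores (-m 0xf, matching the four reactor
    # notices above) with all trace groups enabled (-e 0xFFFF), inside the
    # nvmf_tgt_ns_spdk namespace so it only sees the test interfaces.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!

    # waitforlisten polls /var/tmp/spdk.sock until the app responds; simplified:
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock \
            spdk_get_version > /dev/null 2>&1; do
        sleep 0.5
    done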
21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD 
]] 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:39.492 21:04:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:39.492 ************************************ 00:23:39.492 START TEST spdk_target_abort 00:23:39.492 ************************************ 00:23:39.492 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:23:39.492 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.493 spdk_targetn1 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.493 [2024-08-11 21:04:50.246106] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:39.493 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:23:39.752 21:04:50 
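The nvme_in_userspace helper traced above reduces to a single lspci pipeline, class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe), followed by a filter on driver ownership. A simplified rendering (the real helper also handles FreeBSD and does additional in-use checks for devices still attached to the kernel nvme driver):

    # Enumerate NVMe controller BDFs via PCI class code 0108, prog-if 02.
    nvmes=($(lspci -mm -n -D | grep -i -- -p02 \
             | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))

    bdfs=()
    for bdf in "${nvmes[@]}"; do
        if [[ -e /sys/bus/pci/drivers/nvme/$bdf ]]; then
            # Still owned by the kernel nvme driver. In this run both controllers
            # were already rebound to uio_pci_generic by setup.sh, so this branch
            # is never taken.
            continue
        fi
        bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"    # 0000:00:10.0 and 0000:00:11.0 in this run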
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:39.752 [2024-08-11 21:04:50.274257] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:39.752 21:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:39.752 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:43.038 Initializing NVMe Controllers 00:23:43.038 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: 
nqn.2016-06.io.spdk:testnqn 00:23:43.038 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:43.038 Initialization complete. Launching workers. 00:23:43.038 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10268, failed: 0 00:23:43.038 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1040, failed to submit 9228 00:23:43.038 success 761, unsuccessful 279, failed 0 00:23:43.038 21:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:43.038 21:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:43.038 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:46.326 Initializing NVMe Controllers 00:23:46.326 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:46.326 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:46.326 Initialization complete. Launching workers. 00:23:46.326 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9029, failed: 0 00:23:46.326 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1167, failed to submit 7862 00:23:46.326 success 417, unsuccessful 750, failed 0 00:23:46.326 21:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:46.326 21:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:46.326 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:49.610 Initializing NVMe Controllers 00:23:49.610 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:49.610 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:49.610 Initialization complete. Launching workers. 
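The subsystem these abort runs are targeting was assembled just before the first run with five RPCs: attach the local PCIe controller as a bdev, create the TCP transport, and export the namespace on 10.0.0.3:4420. The commands are as traced; rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the first userspace NVMe controller (0000:00:10.0); this creates
    # the namespace bdev "spdk_targetn1" seen in the trace.
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target

    # TCP transport, subsystem, namespace, listener.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420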
00:23:49.610 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32408, failed: 0 00:23:49.610 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2397, failed to submit 30011 00:23:49.610 success 394, unsuccessful 2003, failed 0 00:23:49.610 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:49.610 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:49.610 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:49.610 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:49.610 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:49.610 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@557 -- # xtrace_disable 00:23:49.610 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 96750 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 96750 ']' 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 96750 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96750 00:23:49.869 killing process with pid 96750 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96750' 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 96750 00:23:49.869 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 96750 00:23:50.128 00:23:50.128 real 0m10.524s 00:23:50.128 user 0m43.187s 00:23:50.128 sys 0m2.059s 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:50.128 ************************************ 00:23:50.128 END TEST spdk_target_abort 00:23:50.128 ************************************ 00:23:50.128 21:05:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:50.128 21:05:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:50.128 21:05:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:50.128 21:05:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:50.128 ************************************ 00:23:50.128 START TEST kernel_target_abort 00:23:50.128 
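The three runs above are the rabort helper iterating over queue depths 4, 24 and 64; each run drives a mixed read/write workload of 4 KiB I/Os (-w rw -M 50 -o 4096) against the subsystem and submits aborts against in-flight commands. In the summary lines, "success" counts aborts that completed, "unsuccessful" counts aborts that did not take effect (typically because the target command had already completed), and "failed" counts abort submissions that errored out. The loop itself is:

    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done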
************************************ 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@761 -- # local ip 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@762 -- # ip_candidates=() 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@762 -- # local -A ip_candidates 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@764 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # [[ -z tcp ]] 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip=NVMF_INITIATOR_IP 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # [[ -z 10.0.0.1 ]] 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # echo 10.0.0.1 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@654 -- # nvmet=/sys/kernel/config/nvmet 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@655 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # local block nvme 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # modprobe nvmet 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:50.128 21:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:50.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:50.388 Waiting for block devices as requested 00:23:50.657 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:50.657 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # is_block_zoned nvme0n1 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # block_in_use nvme0n1 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:50.657 No valid GPT data, bailing 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # nvme=/dev/nvme0n1 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # is_block_zoned nvme0n2 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:23:50.657 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:50.658 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:50.658 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # block_in_use nvme0n2 00:23:50.658 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:50.658 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:50.917 No valid GPT data, bailing 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # nvme=/dev/nvme0n2 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # is_block_zoned nvme0n3 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # block_in_use nvme0n3 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:50.917 No valid GPT data, bailing 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # nvme=/dev/nvme0n3 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # for block in /sys/block/nvme* 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # is_block_zoned nvme1n1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # block_in_use nvme1n1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:50.917 No valid GPT data, bailing 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # nvme=/dev/nvme1n1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # echo 1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # echo /dev/nvme1n1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo 1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 10.0.0.1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo tcp 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 4420 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo ipv4 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 --hostid=78d593be-f127-44be-9e85-a8fa7f0a66f9 -a 10.0.0.1 -t tcp -s 4420 00:23:50.917 00:23:50.917 Discovery Log Number of Records 2, Generation counter 2 00:23:50.917 =====Discovery Log Entry 0====== 00:23:50.917 trtype: tcp 00:23:50.917 adrfam: ipv4 00:23:50.917 subtype: current discovery subsystem 00:23:50.917 treq: not specified, sq flow control disable supported 00:23:50.917 portid: 1 00:23:50.917 trsvcid: 4420 00:23:50.917 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:50.917 traddr: 10.0.0.1 00:23:50.917 eflags: none 00:23:50.917 sectype: none 00:23:50.917 =====Discovery Log Entry 1====== 00:23:50.917 trtype: tcp 00:23:50.917 adrfam: ipv4 00:23:50.917 subtype: nvme subsystem 00:23:50.917 treq: not specified, sq flow control disable supported 00:23:50.917 portid: 1 00:23:50.917 trsvcid: 4420 00:23:50.917 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:50.917 traddr: 10.0.0.1 00:23:50.917 eflags: none 00:23:50.917 sectype: none 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:50.917 21:05:01 
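configure_kernel_target builds the same kind of target, but with the in-kernel nvmet/nvmet-tcp drivers: after setup.sh reset returns the disks to the kernel nvme driver and the script picks the first namespace without a partition table (/dev/nvme1n1 here, via spdk-gpt.py and blkid), the whole configuration is plain configfs writes. A sketch of the sequence; the echoed values are from the trace, the attribute file names follow the standard nvmet configfs layout and are assumed (the trace also writes an SPDK-prefixed identifier into one of the subsystem attributes, omitted here):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet

    # Subsystem with one namespace backed by the chosen block device.
    mkdir "$subsys" "$subsys/namespaces/1"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"

    # TCP port on 10.0.0.1:4420, then link the subsystem to it.
    mkdir "$port"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

    # Verify from the initiator side (the script also passes --hostnqn/--hostid).
    nvme discover -a 10.0.0.1 -t tcp -s 4420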
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:50.917 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:50.918 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:50.918 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:50.918 21:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:51.177 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:54.468 Initializing NVMe Controllers 00:23:54.468 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:54.468 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:54.468 Initialization complete. Launching workers. 00:23:54.468 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33002, failed: 0 00:23:54.468 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33002, failed to submit 0 00:23:54.468 success 0, unsuccessful 33002, failed 0 00:23:54.468 21:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:54.468 21:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:54.468 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:23:57.755 Initializing NVMe Controllers 00:23:57.755 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:57.755 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:57.755 Initialization complete. Launching workers. 
00:23:57.755 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68793, failed: 0 00:23:57.755 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29880, failed to submit 38913 00:23:57.755 success 0, unsuccessful 29880, failed 0 00:23:57.755 21:05:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:57.755 21:05:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:57.755 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:01.041 Initializing NVMe Controllers 00:24:01.041 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:01.041 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:01.041 Initialization complete. Launching workers. 00:24:01.041 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82259, failed: 0 00:24:01.041 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20548, failed to submit 61711 00:24:01.041 success 0, unsuccessful 20548, failed 0 00:24:01.041 21:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:01.041 21:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:01.041 21:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # echo 0 00:24:01.041 21:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:01.041 21:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@709 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:01.041 21:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:01.041 21:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@711 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:01.041 21:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # modules=(/sys/module/nvmet/holders/*) 00:24:01.041 21:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # modprobe -r nvmet_tcp nvmet 00:24:01.041 21:05:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:01.300 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:02.239 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:02.239 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:02.239 00:24:02.239 real 0m12.209s 00:24:02.239 user 0m6.019s 00:24:02.239 sys 0m3.697s 00:24:02.239 21:05:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:02.239 21:05:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:02.239 ************************************ 00:24:02.239 END TEST kernel_target_abort 00:24:02.239 ************************************ 00:24:02.239 21:05:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:02.239 21:05:12 
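clean_kernel_target, traced a little further on, undoes the configfs setup in reverse order and unloads the modules; nvmftestfini then strips only the firewall rules carrying the SPDK_NVMF comment and tears down the veth/bridge topology. Roughly (file names for the bare "echo 0" follow the standard nvmet layout and are assumed):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    echo 0 > "$subsys/namespaces/1/enable"     # disable the namespace first
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet

    # Later, nvmftestfini drops only the tagged iptables rules:
    iptables-save | grep -v SPDK_NVMF | iptables-restore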
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:02.239 21:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # nvmfcleanup 00:24:02.239 21:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.498 rmmod nvme_tcp 00:24:02.498 rmmod nvme_fabrics 00:24:02.498 rmmod nvme_keyring 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # '[' -n 96750 ']' 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # killprocess 96750 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 96750 ']' 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 96750 00:24:02.498 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (96750) - No such process 00:24:02.498 Process with pid 96750 is not found 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 96750 is not found' 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # '[' iso == iso ']' 00:24:02.498 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:02.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:02.757 Waiting for block devices as requested 00:24:03.015 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:03.015 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # [[ tcp == \t\c\p ]] 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmf_tcp_fini 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # iptr 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@783 -- # iptables-save 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@783 -- # grep -v SPDK_NVMF 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@783 -- # iptables-restore 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # nvmf_veth_fini 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # ip link set nvmf_init_br nomaster 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # ip link set nvmf_init_br2 nomaster 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # ip link set nvmf_tgt_br nomaster 00:24:03.015 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # ip link set nvmf_tgt_br2 nomaster 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br down 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 down 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br down 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 down 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link delete nvmf_br type bridge 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link delete nvmf_init_if 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link delete nvmf_init_if2 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # remove_spdk_ns 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@648 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # return 0 00:24:03.274 00:24:03.274 real 0m26.379s 00:24:03.274 user 0m50.518s 00:24:03.274 sys 0m7.176s 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:03.274 21:05:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:03.274 ************************************ 00:24:03.274 END TEST nvmf_abort_qd_sizes 00:24:03.274 ************************************ 00:24:03.274 21:05:14 -- spdk/autotest.sh@301 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:03.274 21:05:14 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:03.274 21:05:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:03.274 21:05:14 -- common/autotest_common.sh@10 -- # set +x 00:24:03.274 ************************************ 00:24:03.275 START TEST keyring_file 00:24:03.275 ************************************ 00:24:03.275 21:05:14 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:03.534 * Looking for test storage... 
00:24:03.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:03.534 21:05:14 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:03.534 21:05:14 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:03.534 21:05:14 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.534 21:05:14 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.534 21:05:14 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.534 21:05:14 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.534 21:05:14 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.534 21:05:14 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.534 21:05:14 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:03.534 21:05:14 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@47 -- # : 0 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.534 21:05:14 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:03.534 21:05:14 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:03.534 21:05:14 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:03.534 21:05:14 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:03.534 21:05:14 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:03.534 21:05:14 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:03.534 21:05:14 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:03.534 21:05:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:03.534 21:05:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:03.534 21:05:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:03.534 21:05:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:03.534 21:05:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:03.534 21:05:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ekYW2sBBVV 00:24:03.534 21:05:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@735 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@722 -- # local prefix key digest 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@724 -- # prefix=NVMeTLSkey-1 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@724 -- # key=00112233445566778899aabbccddeeff 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@724 -- # digest=0 00:24:03.534 21:05:14 keyring_file -- nvmf/common.sh@725 -- # python - 00:24:03.534 21:05:14 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ekYW2sBBVV 00:24:03.535 21:05:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ekYW2sBBVV 00:24:03.535 21:05:14 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ekYW2sBBVV 00:24:03.535 21:05:14 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:03.535 21:05:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:03.535 21:05:14 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:03.535 21:05:14 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:03.535 21:05:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:03.535 21:05:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:03.535 21:05:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.11KBUupkIZ 00:24:03.535 21:05:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:03.535 21:05:14 keyring_file -- nvmf/common.sh@735 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:03.535 21:05:14 keyring_file -- nvmf/common.sh@722 -- # local prefix key digest 00:24:03.535 21:05:14 keyring_file -- nvmf/common.sh@724 -- # prefix=NVMeTLSkey-1 00:24:03.535 21:05:14 keyring_file -- nvmf/common.sh@724 -- # key=112233445566778899aabbccddeeff00 00:24:03.535 21:05:14 keyring_file -- nvmf/common.sh@724 -- # digest=0 00:24:03.535 21:05:14 keyring_file -- nvmf/common.sh@725 -- # python - 00:24:03.535 21:05:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.11KBUupkIZ 00:24:03.535 21:05:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.11KBUupkIZ 00:24:03.535 21:05:14 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.11KBUupkIZ 00:24:03.535 21:05:14 keyring_file -- keyring/file.sh@30 -- # tgtpid=97655 00:24:03.535 21:05:14 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:03.535 21:05:14 keyring_file -- keyring/file.sh@32 -- # waitforlisten 97655 00:24:03.535 21:05:14 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 97655 ']' 00:24:03.535 21:05:14 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.535 21:05:14 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:03.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.535 21:05:14 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.535 21:05:14 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:03.535 21:05:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:03.794 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:03.794 [2024-08-11 21:05:14.326913] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
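prep_key, traced above for key0 and key1, turns each raw hex key into a TLS PSK file: it wraps the key in the NVMeTLSkey-1 interchange encoding (via the inline python helper behind format_interchange_psk in nvmf/common.sh), writes it to a mktemp path and locks the permissions down. A sketch for key0, using the same helper as the trace:

    key=00112233445566778899aabbccddeeff
    digest=0                 # digest selector as passed in the trace
    path=$(mktemp)           # /tmp/tmp.ekYW2sBBVV in this run
    format_interchange_psk "$key" "$digest" > "$path"
    chmod 0600 "$path"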
00:24:03.794 [2024-08-11 21:05:14.327020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97655 ] 00:24:03.794 [2024-08-11 21:05:14.466763] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.794 [2024-08-11 21:05:14.562878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.052 [2024-08-11 21:05:14.620699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:04.619 21:05:15 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:04.619 21:05:15 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:24:04.619 21:05:15 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:04.619 21:05:15 keyring_file -- common/autotest_common.sh@557 -- # xtrace_disable 00:24:04.619 21:05:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:04.619 [2024-08-11 21:05:15.376802] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.878 null0 00:24:04.878 [2024-08-11 21:05:15.408743] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:04.878 [2024-08-11 21:05:15.408970] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:04.878 [2024-08-11 21:05:15.416744] tcp.c:3766:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:24:04.878 21:05:15 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@646 -- # local es=0 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@648 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@634 -- # local arg=rpc_cmd 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@638 -- # type -t rpc_cmd 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@649 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@557 -- # xtrace_disable 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:04.878 [2024-08-11 21:05:15.432766] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:04.878 request: 00:24:04.878 { 00:24:04.878 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:04.878 "secure_channel": false, 00:24:04.878 "listen_address": { 00:24:04.878 "trtype": "tcp", 00:24:04.878 "traddr": "127.0.0.1", 00:24:04.878 "trsvcid": "4420" 00:24:04.878 }, 00:24:04.878 "method": "nvmf_subsystem_add_listener", 00:24:04.878 "req_id": 1 00:24:04.878 } 00:24:04.878 Got JSON-RPC error response 00:24:04.878 response: 00:24:04.878 { 00:24:04.878 "code": -32602, 00:24:04.878 "message": "Invalid parameters" 00:24:04.878 } 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@585 -- # [[ 1 == 0 
]] 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@649 -- # es=1 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:24:04.878 21:05:15 keyring_file -- keyring/file.sh@46 -- # bperfpid=97672 00:24:04.878 21:05:15 keyring_file -- keyring/file.sh@48 -- # waitforlisten 97672 /var/tmp/bperf.sock 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 97672 ']' 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:04.878 21:05:15 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:04.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:04.878 21:05:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:04.878 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:04.878 [2024-08-11 21:05:15.498181] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 00:24:04.878 [2024-08-11 21:05:15.498281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97672 ] 00:24:04.878 [2024-08-11 21:05:15.638440] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.137 [2024-08-11 21:05:15.735892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.137 [2024-08-11 21:05:15.794087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:06.073 21:05:16 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:06.073 21:05:16 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:24:06.073 21:05:16 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ekYW2sBBVV 00:24:06.073 21:05:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ekYW2sBBVV 00:24:06.073 21:05:16 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.11KBUupkIZ 00:24:06.073 21:05:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.11KBUupkIZ 00:24:06.331 21:05:16 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:24:06.332 21:05:16 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:24:06.332 21:05:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:06.332 21:05:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:06.332 21:05:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:06.590 21:05:17 
keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.ekYW2sBBVV == \/\t\m\p\/\t\m\p\.\e\k\Y\W\2\s\B\B\V\V ]] 00:24:06.590 21:05:17 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:06.590 21:05:17 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:24:06.590 21:05:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:06.590 21:05:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:06.590 21:05:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:06.849 21:05:17 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.11KBUupkIZ == \/\t\m\p\/\t\m\p\.\1\1\K\B\U\u\p\k\I\Z ]] 00:24:06.849 21:05:17 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:24:06.849 21:05:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:06.849 21:05:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:06.849 21:05:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:06.849 21:05:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:06.850 21:05:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:07.108 21:05:17 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:07.108 21:05:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:24:07.108 21:05:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:07.108 21:05:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:07.108 21:05:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.108 21:05:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.108 21:05:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:07.367 21:05:18 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:07.367 21:05:18 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:07.367 21:05:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:07.625 [2024-08-11 21:05:18.319334] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.625 nvme0n1 00:24:07.884 21:05:18 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:24:07.884 21:05:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:07.884 21:05:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:07.884 21:05:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.884 21:05:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:07.884 21:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:07.884 21:05:18 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:07.884 21:05:18 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:24:07.884 21:05:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:07.884 21:05:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 
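Everything in this part of the trace goes through bdevperf's private RPC socket rather than the target's: keys are registered with keyring_file_add_key, the TCP controller is attached with --psk, and get_refcnt is simply keyring_get_keys filtered through jq. Condensed into plain commands (all taken from the invocations traced above), the happy-path sequence is roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# register both key files with the bdevperf keyring (file.sh@49-50)
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.ekYW2sBBVV
"$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.11KBUupkIZ

# attach a TCP controller that authenticates with key0 (file.sh@57)
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# the attached controller should pin key0: its refcnt goes from 1 to 2 (file.sh@59)
"$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'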
00:24:07.884 21:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:07.884 21:05:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:07.884 21:05:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:08.142 21:05:18 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:08.142 21:05:18 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:08.401 Running I/O for 1 seconds... 00:24:09.340 00:24:09.340 Latency(us) 00:24:09.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.340 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:09.340 nvme0n1 : 1.01 13158.33 51.40 0.00 0.00 9696.36 3559.80 14656.23 00:24:09.340 =================================================================================================================== 00:24:09.340 Total : 13158.33 51.40 0.00 0.00 9696.36 3559.80 14656.23 00:24:09.340 0 00:24:09.340 21:05:20 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:09.340 21:05:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:09.598 21:05:20 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:24:09.598 21:05:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:09.598 21:05:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:09.598 21:05:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:09.598 21:05:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:09.598 21:05:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:09.857 21:05:20 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:09.857 21:05:20 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:24:09.857 21:05:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:09.857 21:05:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:09.857 21:05:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:09.857 21:05:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:09.857 21:05:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:10.427 21:05:20 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:10.427 21:05:20 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.427 21:05:20 keyring_file -- common/autotest_common.sh@646 -- # local es=0 00:24:10.427 21:05:20 keyring_file -- common/autotest_common.sh@648 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.427 21:05:20 keyring_file -- common/autotest_common.sh@634 -- # local arg=bperf_cmd 00:24:10.427 21:05:20 keyring_file -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:10.427 21:05:20 keyring_file -- common/autotest_common.sh@638 -- # type -t bperf_cmd 00:24:10.427 21:05:20 
keyring_file -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:10.427 21:05:20 keyring_file -- common/autotest_common.sh@649 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.427 21:05:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:10.427 [2024-08-11 21:05:21.172992] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:10.427 [2024-08-11 21:05:21.173738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f6530 (107): Transport endpoint is not connected 00:24:10.427 [2024-08-11 21:05:21.174712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f6530 (9): Bad file descriptor 00:24:10.427 [2024-08-11 21:05:21.175709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:10.427 [2024-08-11 21:05:21.175738] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:10.427 [2024-08-11 21:05:21.175750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:10.427 request: 00:24:10.427 { 00:24:10.427 "name": "nvme0", 00:24:10.427 "trtype": "tcp", 00:24:10.427 "traddr": "127.0.0.1", 00:24:10.427 "adrfam": "ipv4", 00:24:10.427 "trsvcid": "4420", 00:24:10.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:10.427 "prchk_reftag": false, 00:24:10.427 "prchk_guard": false, 00:24:10.427 "hdgst": false, 00:24:10.427 "ddgst": false, 00:24:10.427 "psk": "key1", 00:24:10.427 "method": "bdev_nvme_attach_controller", 00:24:10.427 "req_id": 1 00:24:10.427 } 00:24:10.427 Got JSON-RPC error response 00:24:10.427 response: 00:24:10.427 { 00:24:10.427 "code": -5, 00:24:10.427 "message": "Input/output error" 00:24:10.427 } 00:24:10.427 21:05:21 keyring_file -- common/autotest_common.sh@649 -- # es=1 00:24:10.427 21:05:21 keyring_file -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:24:10.427 21:05:21 keyring_file -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:24:10.427 21:05:21 keyring_file -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:24:10.427 21:05:21 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:24:10.427 21:05:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:10.427 21:05:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:10.427 21:05:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:10.427 21:05:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:10.427 21:05:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:10.995 21:05:21 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:10.995 21:05:21 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:24:10.995 21:05:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:10.995 21:05:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:10.995 21:05:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:24:10.995 21:05:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:10.995 21:05:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:10.995 21:05:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:10.995 21:05:21 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:10.995 21:05:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:11.254 21:05:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:11.254 21:05:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:11.512 21:05:22 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:11.512 21:05:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.512 21:05:22 keyring_file -- keyring/file.sh@77 -- # jq length 00:24:11.771 21:05:22 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:11.771 21:05:22 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.ekYW2sBBVV 00:24:11.771 21:05:22 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ekYW2sBBVV 00:24:11.771 21:05:22 keyring_file -- common/autotest_common.sh@646 -- # local es=0 00:24:11.771 21:05:22 keyring_file -- common/autotest_common.sh@648 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ekYW2sBBVV 00:24:11.771 21:05:22 keyring_file -- common/autotest_common.sh@634 -- # local arg=bperf_cmd 00:24:11.771 21:05:22 keyring_file -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:11.771 21:05:22 keyring_file -- common/autotest_common.sh@638 -- # type -t bperf_cmd 00:24:11.771 21:05:22 keyring_file -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:11.771 21:05:22 keyring_file -- common/autotest_common.sh@649 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ekYW2sBBVV 00:24:11.771 21:05:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ekYW2sBBVV 00:24:12.029 [2024-08-11 21:05:22.754092] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ekYW2sBBVV': 0100660 00:24:12.029 [2024-08-11 21:05:22.754489] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:12.029 request: 00:24:12.029 { 00:24:12.029 "name": "key0", 00:24:12.029 "path": "/tmp/tmp.ekYW2sBBVV", 00:24:12.029 "method": "keyring_file_add_key", 00:24:12.029 "req_id": 1 00:24:12.029 } 00:24:12.029 Got JSON-RPC error response 00:24:12.029 response: 00:24:12.029 { 00:24:12.029 "code": -1, 00:24:12.029 "message": "Operation not permitted" 00:24:12.029 } 00:24:12.029 21:05:22 keyring_file -- common/autotest_common.sh@649 -- # es=1 00:24:12.029 21:05:22 keyring_file -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:24:12.029 21:05:22 keyring_file -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:24:12.029 21:05:22 keyring_file -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:24:12.029 21:05:22 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.ekYW2sBBVV 00:24:12.029 21:05:22 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 
/tmp/tmp.ekYW2sBBVV 00:24:12.029 21:05:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ekYW2sBBVV 00:24:12.288 21:05:23 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.ekYW2sBBVV 00:24:12.288 21:05:23 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:24:12.288 21:05:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:12.288 21:05:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:12.288 21:05:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:12.288 21:05:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:12.288 21:05:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:12.550 21:05:23 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:12.550 21:05:23 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:12.550 21:05:23 keyring_file -- common/autotest_common.sh@646 -- # local es=0 00:24:12.550 21:05:23 keyring_file -- common/autotest_common.sh@648 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:12.550 21:05:23 keyring_file -- common/autotest_common.sh@634 -- # local arg=bperf_cmd 00:24:12.550 21:05:23 keyring_file -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:12.550 21:05:23 keyring_file -- common/autotest_common.sh@638 -- # type -t bperf_cmd 00:24:12.813 21:05:23 keyring_file -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:12.813 21:05:23 keyring_file -- common/autotest_common.sh@649 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:12.813 21:05:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:12.813 [2024-08-11 21:05:23.522272] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ekYW2sBBVV': No such file or directory 00:24:12.813 [2024-08-11 21:05:23.522652] nvme_tcp.c:2587:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:12.813 [2024-08-11 21:05:23.522685] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:12.813 [2024-08-11 21:05:23.522695] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:12.813 [2024-08-11 21:05:23.522705] bdev_nvme.c:6285:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:12.813 request: 00:24:12.813 { 00:24:12.813 "name": "nvme0", 00:24:12.813 "trtype": "tcp", 00:24:12.813 "traddr": "127.0.0.1", 00:24:12.813 "adrfam": "ipv4", 00:24:12.813 "trsvcid": "4420", 00:24:12.813 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:12.813 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:12.813 "prchk_reftag": false, 00:24:12.813 "prchk_guard": false, 00:24:12.813 "hdgst": false, 00:24:12.813 "ddgst": false, 00:24:12.813 "psk": "key0", 00:24:12.813 "method": 
"bdev_nvme_attach_controller", 00:24:12.813 "req_id": 1 00:24:12.813 } 00:24:12.813 Got JSON-RPC error response 00:24:12.813 response: 00:24:12.813 { 00:24:12.813 "code": -19, 00:24:12.813 "message": "No such device" 00:24:12.813 } 00:24:12.813 21:05:23 keyring_file -- common/autotest_common.sh@649 -- # es=1 00:24:12.813 21:05:23 keyring_file -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:24:12.813 21:05:23 keyring_file -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:24:12.813 21:05:23 keyring_file -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:24:12.813 21:05:23 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:12.813 21:05:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:13.072 21:05:23 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:13.072 21:05:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:13.072 21:05:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:13.072 21:05:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:13.072 21:05:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:13.072 21:05:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:13.072 21:05:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZHT2SahEVc 00:24:13.072 21:05:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:13.072 21:05:23 keyring_file -- nvmf/common.sh@735 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:13.072 21:05:23 keyring_file -- nvmf/common.sh@722 -- # local prefix key digest 00:24:13.072 21:05:23 keyring_file -- nvmf/common.sh@724 -- # prefix=NVMeTLSkey-1 00:24:13.072 21:05:23 keyring_file -- nvmf/common.sh@724 -- # key=00112233445566778899aabbccddeeff 00:24:13.072 21:05:23 keyring_file -- nvmf/common.sh@724 -- # digest=0 00:24:13.072 21:05:23 keyring_file -- nvmf/common.sh@725 -- # python - 00:24:13.330 21:05:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZHT2SahEVc 00:24:13.330 21:05:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZHT2SahEVc 00:24:13.330 21:05:23 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ZHT2SahEVc 00:24:13.330 21:05:23 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZHT2SahEVc 00:24:13.330 21:05:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZHT2SahEVc 00:24:13.330 21:05:24 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:13.330 21:05:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:13.896 nvme0n1 00:24:13.896 21:05:24 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:24:13.896 21:05:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:13.896 21:05:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:13.896 21:05:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:13.896 21:05:24 keyring_file -- keyring/common.sh@10 
-- # jq '.[] | select(.name == "key0")' 00:24:13.896 21:05:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:13.896 21:05:24 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:13.896 21:05:24 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:13.896 21:05:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:14.154 21:05:24 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:24:14.154 21:05:24 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:24:14.154 21:05:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:14.154 21:05:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.154 21:05:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:14.412 21:05:25 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:14.412 21:05:25 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:24:14.412 21:05:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:14.412 21:05:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:14.412 21:05:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:14.412 21:05:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:14.412 21:05:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.671 21:05:25 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:14.671 21:05:25 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:14.671 21:05:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:14.929 21:05:25 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:14.930 21:05:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:14.930 21:05:25 keyring_file -- keyring/file.sh@104 -- # jq length 00:24:15.188 21:05:25 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:15.188 21:05:25 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ZHT2SahEVc 00:24:15.188 21:05:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ZHT2SahEVc 00:24:15.446 21:05:26 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.11KBUupkIZ 00:24:15.446 21:05:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.11KBUupkIZ 00:24:15.705 21:05:26 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:15.705 21:05:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:15.963 nvme0n1 00:24:15.964 21:05:26 keyring_file -- 
keyring/file.sh@112 -- # bperf_cmd save_config 00:24:15.964 21:05:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:16.222 21:05:26 keyring_file -- keyring/file.sh@112 -- # config='{ 00:24:16.222 "subsystems": [ 00:24:16.222 { 00:24:16.222 "subsystem": "keyring", 00:24:16.222 "config": [ 00:24:16.222 { 00:24:16.222 "method": "keyring_file_add_key", 00:24:16.222 "params": { 00:24:16.222 "name": "key0", 00:24:16.222 "path": "/tmp/tmp.ZHT2SahEVc" 00:24:16.222 } 00:24:16.222 }, 00:24:16.222 { 00:24:16.222 "method": "keyring_file_add_key", 00:24:16.222 "params": { 00:24:16.222 "name": "key1", 00:24:16.222 "path": "/tmp/tmp.11KBUupkIZ" 00:24:16.222 } 00:24:16.222 } 00:24:16.222 ] 00:24:16.222 }, 00:24:16.222 { 00:24:16.222 "subsystem": "iobuf", 00:24:16.222 "config": [ 00:24:16.222 { 00:24:16.222 "method": "iobuf_set_options", 00:24:16.222 "params": { 00:24:16.222 "small_pool_count": 8192, 00:24:16.222 "large_pool_count": 1024, 00:24:16.222 "small_bufsize": 8192, 00:24:16.222 "large_bufsize": 135168 00:24:16.222 } 00:24:16.222 } 00:24:16.222 ] 00:24:16.222 }, 00:24:16.222 { 00:24:16.222 "subsystem": "sock", 00:24:16.223 "config": [ 00:24:16.223 { 00:24:16.223 "method": "sock_set_default_impl", 00:24:16.223 "params": { 00:24:16.223 "impl_name": "uring" 00:24:16.223 } 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "method": "sock_impl_set_options", 00:24:16.223 "params": { 00:24:16.223 "impl_name": "ssl", 00:24:16.223 "recv_buf_size": 4096, 00:24:16.223 "send_buf_size": 4096, 00:24:16.223 "enable_recv_pipe": true, 00:24:16.223 "enable_quickack": false, 00:24:16.223 "enable_placement_id": 0, 00:24:16.223 "enable_zerocopy_send_server": true, 00:24:16.223 "enable_zerocopy_send_client": false, 00:24:16.223 "zerocopy_threshold": 0, 00:24:16.223 "tls_version": 0, 00:24:16.223 "enable_ktls": false 00:24:16.223 } 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "method": "sock_impl_set_options", 00:24:16.223 "params": { 00:24:16.223 "impl_name": "posix", 00:24:16.223 "recv_buf_size": 2097152, 00:24:16.223 "send_buf_size": 2097152, 00:24:16.223 "enable_recv_pipe": true, 00:24:16.223 "enable_quickack": false, 00:24:16.223 "enable_placement_id": 0, 00:24:16.223 "enable_zerocopy_send_server": true, 00:24:16.223 "enable_zerocopy_send_client": false, 00:24:16.223 "zerocopy_threshold": 0, 00:24:16.223 "tls_version": 0, 00:24:16.223 "enable_ktls": false 00:24:16.223 } 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "method": "sock_impl_set_options", 00:24:16.223 "params": { 00:24:16.223 "impl_name": "uring", 00:24:16.223 "recv_buf_size": 2097152, 00:24:16.223 "send_buf_size": 2097152, 00:24:16.223 "enable_recv_pipe": true, 00:24:16.223 "enable_quickack": false, 00:24:16.223 "enable_placement_id": 0, 00:24:16.223 "enable_zerocopy_send_server": false, 00:24:16.223 "enable_zerocopy_send_client": false, 00:24:16.223 "zerocopy_threshold": 0, 00:24:16.223 "tls_version": 0, 00:24:16.223 "enable_ktls": false 00:24:16.223 } 00:24:16.223 } 00:24:16.223 ] 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "subsystem": "vmd", 00:24:16.223 "config": [] 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "subsystem": "accel", 00:24:16.223 "config": [ 00:24:16.223 { 00:24:16.223 "method": "accel_set_options", 00:24:16.223 "params": { 00:24:16.223 "small_cache_size": 128, 00:24:16.223 "large_cache_size": 16, 00:24:16.223 "task_count": 2048, 00:24:16.223 "sequence_count": 2048, 00:24:16.223 "buf_count": 2048 00:24:16.223 } 00:24:16.223 } 00:24:16.223 ] 00:24:16.223 }, 
00:24:16.223 { 00:24:16.223 "subsystem": "bdev", 00:24:16.223 "config": [ 00:24:16.223 { 00:24:16.223 "method": "bdev_set_options", 00:24:16.223 "params": { 00:24:16.223 "bdev_io_pool_size": 65535, 00:24:16.223 "bdev_io_cache_size": 256, 00:24:16.223 "bdev_auto_examine": true, 00:24:16.223 "iobuf_small_cache_size": 128, 00:24:16.223 "iobuf_large_cache_size": 16 00:24:16.223 } 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "method": "bdev_raid_set_options", 00:24:16.223 "params": { 00:24:16.223 "process_window_size_kb": 1024, 00:24:16.223 "process_max_bandwidth_mb_sec": 0 00:24:16.223 } 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "method": "bdev_iscsi_set_options", 00:24:16.223 "params": { 00:24:16.223 "timeout_sec": 30 00:24:16.223 } 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "method": "bdev_nvme_set_options", 00:24:16.223 "params": { 00:24:16.223 "action_on_timeout": "none", 00:24:16.223 "timeout_us": 0, 00:24:16.223 "timeout_admin_us": 0, 00:24:16.223 "keep_alive_timeout_ms": 10000, 00:24:16.223 "arbitration_burst": 0, 00:24:16.223 "low_priority_weight": 0, 00:24:16.223 "medium_priority_weight": 0, 00:24:16.223 "high_priority_weight": 0, 00:24:16.223 "nvme_adminq_poll_period_us": 10000, 00:24:16.223 "nvme_ioq_poll_period_us": 0, 00:24:16.223 "io_queue_requests": 512, 00:24:16.223 "delay_cmd_submit": true, 00:24:16.223 "transport_retry_count": 4, 00:24:16.223 "bdev_retry_count": 3, 00:24:16.223 "transport_ack_timeout": 0, 00:24:16.223 "ctrlr_loss_timeout_sec": 0, 00:24:16.223 "reconnect_delay_sec": 0, 00:24:16.223 "fast_io_fail_timeout_sec": 0, 00:24:16.223 "disable_auto_failback": false, 00:24:16.223 "generate_uuids": false, 00:24:16.223 "transport_tos": 0, 00:24:16.223 "nvme_error_stat": false, 00:24:16.223 "rdma_srq_size": 0, 00:24:16.223 "io_path_stat": false, 00:24:16.223 "allow_accel_sequence": false, 00:24:16.223 "rdma_max_cq_size": 0, 00:24:16.223 "rdma_cm_event_timeout_ms": 0, 00:24:16.223 "dhchap_digests": [ 00:24:16.223 "sha256", 00:24:16.223 "sha384", 00:24:16.223 "sha512" 00:24:16.223 ], 00:24:16.223 "dhchap_dhgroups": [ 00:24:16.223 "null", 00:24:16.223 "ffdhe2048", 00:24:16.223 "ffdhe3072", 00:24:16.223 "ffdhe4096", 00:24:16.223 "ffdhe6144", 00:24:16.223 "ffdhe8192" 00:24:16.223 ] 00:24:16.223 } 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "method": "bdev_nvme_attach_controller", 00:24:16.223 "params": { 00:24:16.223 "name": "nvme0", 00:24:16.223 "trtype": "TCP", 00:24:16.223 "adrfam": "IPv4", 00:24:16.223 "traddr": "127.0.0.1", 00:24:16.223 "trsvcid": "4420", 00:24:16.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.223 "prchk_reftag": false, 00:24:16.223 "prchk_guard": false, 00:24:16.223 "ctrlr_loss_timeout_sec": 0, 00:24:16.223 "reconnect_delay_sec": 0, 00:24:16.223 "fast_io_fail_timeout_sec": 0, 00:24:16.223 "psk": "key0", 00:24:16.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:16.223 "hdgst": false, 00:24:16.223 "ddgst": false 00:24:16.223 } 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "method": "bdev_nvme_set_hotplug", 00:24:16.223 "params": { 00:24:16.223 "period_us": 100000, 00:24:16.223 "enable": false 00:24:16.223 } 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "method": "bdev_wait_for_examine" 00:24:16.223 } 00:24:16.223 ] 00:24:16.223 }, 00:24:16.223 { 00:24:16.223 "subsystem": "nbd", 00:24:16.223 "config": [] 00:24:16.223 } 00:24:16.223 ] 00:24:16.223 }' 00:24:16.223 21:05:26 keyring_file -- keyring/file.sh@114 -- # killprocess 97672 00:24:16.223 21:05:26 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 97672 ']' 00:24:16.223 21:05:26 keyring_file -- 
common/autotest_common.sh@950 -- # kill -0 97672 00:24:16.223 21:05:26 keyring_file -- common/autotest_common.sh@951 -- # uname 00:24:16.482 21:05:27 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:16.482 21:05:27 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97672 00:24:16.482 killing process with pid 97672 00:24:16.482 Received shutdown signal, test time was about 1.000000 seconds 00:24:16.482 00:24:16.482 Latency(us) 00:24:16.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.482 =================================================================================================================== 00:24:16.482 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.482 21:05:27 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:16.482 21:05:27 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:16.482 21:05:27 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97672' 00:24:16.482 21:05:27 keyring_file -- common/autotest_common.sh@965 -- # kill 97672 00:24:16.482 21:05:27 keyring_file -- common/autotest_common.sh@970 -- # wait 97672 00:24:16.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:16.482 21:05:27 keyring_file -- keyring/file.sh@117 -- # bperfpid=97924 00:24:16.482 21:05:27 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:16.482 21:05:27 keyring_file -- keyring/file.sh@119 -- # waitforlisten 97924 /var/tmp/bperf.sock 00:24:16.482 21:05:27 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 97924 ']' 00:24:16.482 21:05:27 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:16.482 21:05:27 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:16.482 21:05:27 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:24:16.482 "subsystems": [ 00:24:16.482 { 00:24:16.482 "subsystem": "keyring", 00:24:16.482 "config": [ 00:24:16.482 { 00:24:16.483 "method": "keyring_file_add_key", 00:24:16.483 "params": { 00:24:16.483 "name": "key0", 00:24:16.483 "path": "/tmp/tmp.ZHT2SahEVc" 00:24:16.483 } 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "method": "keyring_file_add_key", 00:24:16.483 "params": { 00:24:16.483 "name": "key1", 00:24:16.483 "path": "/tmp/tmp.11KBUupkIZ" 00:24:16.483 } 00:24:16.483 } 00:24:16.483 ] 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "subsystem": "iobuf", 00:24:16.483 "config": [ 00:24:16.483 { 00:24:16.483 "method": "iobuf_set_options", 00:24:16.483 "params": { 00:24:16.483 "small_pool_count": 8192, 00:24:16.483 "large_pool_count": 1024, 00:24:16.483 "small_bufsize": 8192, 00:24:16.483 "large_bufsize": 135168 00:24:16.483 } 00:24:16.483 } 00:24:16.483 ] 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "subsystem": "sock", 00:24:16.483 "config": [ 00:24:16.483 { 00:24:16.483 "method": "sock_set_default_impl", 00:24:16.483 "params": { 00:24:16.483 "impl_name": "uring" 00:24:16.483 } 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "method": "sock_impl_set_options", 00:24:16.483 "params": { 00:24:16.483 "impl_name": "ssl", 00:24:16.483 "recv_buf_size": 4096, 00:24:16.483 "send_buf_size": 4096, 00:24:16.483 "enable_recv_pipe": true, 00:24:16.483 "enable_quickack": false, 00:24:16.483 "enable_placement_id": 0, 00:24:16.483 "enable_zerocopy_send_server": true, 00:24:16.483 
"enable_zerocopy_send_client": false, 00:24:16.483 "zerocopy_threshold": 0, 00:24:16.483 "tls_version": 0, 00:24:16.483 "enable_ktls": false 00:24:16.483 } 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "method": "sock_impl_set_options", 00:24:16.483 "params": { 00:24:16.483 "impl_name": "posix", 00:24:16.483 "recv_buf_size": 2097152, 00:24:16.483 "send_buf_size": 2097152, 00:24:16.483 "enable_recv_pipe": true, 00:24:16.483 "enable_quickack": false, 00:24:16.483 "enable_placement_id": 0, 00:24:16.483 "enable_zerocopy_send_server": true, 00:24:16.483 "enable_zerocopy_send_client": false, 00:24:16.483 "zerocopy_threshold": 0, 00:24:16.483 "tls_version": 0, 00:24:16.483 "enable_ktls": false 00:24:16.483 } 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "method": "sock_impl_set_options", 00:24:16.483 "params": { 00:24:16.483 "impl_name": "uring", 00:24:16.483 "recv_buf_size": 2097152, 00:24:16.483 "send_buf_size": 2097152, 00:24:16.483 "enable_recv_pipe": true, 00:24:16.483 "enable_quickack": false, 00:24:16.483 "enable_placement_id": 0, 00:24:16.483 "enable_zerocopy_send_server": false, 00:24:16.483 "enable_zerocopy_send_client": false, 00:24:16.483 "zerocopy_threshold": 0, 00:24:16.483 "tls_version": 0, 00:24:16.483 "enable_ktls": false 00:24:16.483 } 00:24:16.483 } 00:24:16.483 ] 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "subsystem": "vmd", 00:24:16.483 "config": [] 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "subsystem": "accel", 00:24:16.483 "config": [ 00:24:16.483 { 00:24:16.483 "method": "accel_set_options", 00:24:16.483 "params": { 00:24:16.483 "small_cache_size": 128, 00:24:16.483 "large_cache_size": 16, 00:24:16.483 "task_count": 2048, 00:24:16.483 "sequence_count": 2048, 00:24:16.483 "buf_count": 2048 00:24:16.483 } 00:24:16.483 } 00:24:16.483 ] 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "subsystem": "bdev", 00:24:16.483 "config": [ 00:24:16.483 { 00:24:16.483 "method": "bdev_set_options", 00:24:16.483 "params": { 00:24:16.483 "bdev_io_pool_size": 65535, 00:24:16.483 "bdev_io_cache_size": 256, 00:24:16.483 "bdev_auto_examine": true, 00:24:16.483 "iobuf_small_cache_size": 128, 00:24:16.483 "iobuf_large_cache_size": 16 00:24:16.483 } 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "method": "bdev_raid_set_options", 00:24:16.483 "params": { 00:24:16.483 "process_window_size_kb": 1024, 00:24:16.483 "process_max_bandwidth_mb_sec": 0 00:24:16.483 } 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "method": "bdev_iscsi_set_options", 00:24:16.483 "params": { 00:24:16.483 "timeout_sec": 30 00:24:16.483 } 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "method": "bdev_nvme_set_options", 00:24:16.483 "params": { 00:24:16.483 "action_on_timeout": "none", 00:24:16.483 "timeout_us": 0, 00:24:16.483 "timeout_admin_us": 0, 00:24:16.483 "keep_alive_timeout_ms": 10000, 00:24:16.483 "arbitration_burst": 0, 00:24:16.483 "low_priority_weight": 0, 00:24:16.483 "medium_priority_weight": 0, 00:24:16.483 "high_priority_weight": 0, 00:24:16.483 "nvme_adminq_poll_period_us": 10000, 00:24:16.483 "nvme_ioq_poll_period_us": 0, 00:24:16.483 "io_queue_requests": 512, 00:24:16.483 "delay_cmd_submit": true, 00:24:16.483 "transport_retry_count": 4, 00:24:16.483 "bdev_retry_count": 3, 00:24:16.483 "transport_ack_timeout": 0, 00:24:16.483 "ctrlr_loss_timeout_sec": 0, 00:24:16.483 "reconnect_delay_sec": 0, 00:24:16.483 "fast_io_fail_timeout_sec": 0, 00:24:16.483 "disable_auto_failback": false, 00:24:16.483 "generate_uuids": false, 00:24:16.483 "transport_tos": 0, 00:24:16.483 "nvme_error_stat": false, 00:24:16.483 "rdma_srq_size": 0, 
00:24:16.483 "io_path_stat": false, 00:24:16.483 "allow_accel_sequence": false, 00:24:16.483 "rdma_max_cq_size": 0, 00:24:16.483 "rdma_cm_event_timeout_ms": 0, 00:24:16.483 "dhchap_digests": [ 00:24:16.483 "sha256", 00:24:16.483 "sha384", 00:24:16.483 "sha512" 00:24:16.483 ], 00:24:16.483 "dhchap_dhgroups": [ 00:24:16.483 "null", 00:24:16.483 "ffdhe2048", 00:24:16.483 "ffdhe3072", 00:24:16.483 "ffdhe4096", 00:24:16.483 "ffdhe6144", 00:24:16.483 "ffdhe8192" 00:24:16.483 ] 00:24:16.483 } 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "method": "bdev_nvme_attach_controller", 00:24:16.483 "params": { 00:24:16.483 "name": "nvme0", 00:24:16.483 "trtype": "TCP", 00:24:16.483 "adrfam": "IPv4", 00:24:16.483 "traddr": "127.0.0.1", 00:24:16.483 "trsvcid": "4420", 00:24:16.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.483 "prchk_reftag": false, 00:24:16.483 "prchk_guard": false, 00:24:16.483 "ctrlr_loss_timeout_sec": 0, 00:24:16.483 "reconnect_delay_sec": 0, 00:24:16.483 "fast_io_fail_timeout_sec": 0, 00:24:16.483 "psk": "key0", 00:24:16.483 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:16.483 "hdgst": false, 00:24:16.483 "ddgst": false 00:24:16.483 } 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "method": "bdev_nvme_set_hotplug", 00:24:16.483 "params": { 00:24:16.483 "period_us": 100000, 00:24:16.483 "enable": false 00:24:16.483 } 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "method": "bdev_wait_for_examine" 00:24:16.483 } 00:24:16.483 ] 00:24:16.483 }, 00:24:16.483 { 00:24:16.483 "subsystem": "nbd", 00:24:16.483 "config": [] 00:24:16.483 } 00:24:16.483 ] 00:24:16.483 }' 00:24:16.483 21:05:27 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:16.483 21:05:27 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:16.483 21:05:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:16.742 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:16.742 [2024-08-11 21:05:27.270668] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
00:24:16.742 [2024-08-11 21:05:27.270782] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97924 ] 00:24:16.742 [2024-08-11 21:05:27.399275] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.742 [2024-08-11 21:05:27.471976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.000 [2024-08-11 21:05:27.604667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:17.000 [2024-08-11 21:05:27.654424] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.567 21:05:28 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:17.567 21:05:28 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:24:17.567 21:05:28 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:17.567 21:05:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.567 21:05:28 keyring_file -- keyring/file.sh@120 -- # jq length 00:24:17.825 21:05:28 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:17.825 21:05:28 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:24:17.825 21:05:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:17.825 21:05:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:17.825 21:05:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:17.825 21:05:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:17.825 21:05:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:18.084 21:05:28 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:18.084 21:05:28 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:24:18.084 21:05:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:18.084 21:05:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:18.084 21:05:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:18.084 21:05:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:18.084 21:05:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:18.342 21:05:29 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:18.342 21:05:29 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:18.342 21:05:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:18.342 21:05:29 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:18.601 21:05:29 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:18.601 21:05:29 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:18.601 21:05:29 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ZHT2SahEVc /tmp/tmp.11KBUupkIZ 00:24:18.601 21:05:29 keyring_file -- keyring/file.sh@20 -- # killprocess 97924 00:24:18.601 21:05:29 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 97924 ']' 00:24:18.601 21:05:29 keyring_file -- common/autotest_common.sh@950 -- # kill -0 97924 00:24:18.601 21:05:29 keyring_file -- 
common/autotest_common.sh@951 -- # uname 00:24:18.601 21:05:29 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:18.601 21:05:29 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97924 00:24:18.601 killing process with pid 97924 00:24:18.601 Received shutdown signal, test time was about 1.000000 seconds 00:24:18.601 00:24:18.601 Latency(us) 00:24:18.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.601 =================================================================================================================== 00:24:18.601 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:18.601 21:05:29 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:18.601 21:05:29 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:18.601 21:05:29 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97924' 00:24:18.601 21:05:29 keyring_file -- common/autotest_common.sh@965 -- # kill 97924 00:24:18.601 21:05:29 keyring_file -- common/autotest_common.sh@970 -- # wait 97924 00:24:18.860 21:05:29 keyring_file -- keyring/file.sh@21 -- # killprocess 97655 00:24:18.860 21:05:29 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 97655 ']' 00:24:18.860 21:05:29 keyring_file -- common/autotest_common.sh@950 -- # kill -0 97655 00:24:18.860 21:05:29 keyring_file -- common/autotest_common.sh@951 -- # uname 00:24:18.860 21:05:29 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:18.860 21:05:29 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97655 00:24:18.860 killing process with pid 97655 00:24:18.860 21:05:29 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:18.860 21:05:29 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:18.860 21:05:29 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97655' 00:24:18.860 21:05:29 keyring_file -- common/autotest_common.sh@965 -- # kill 97655 00:24:18.860 [2024-08-11 21:05:29.536172] app.c:1025:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:18.860 21:05:29 keyring_file -- common/autotest_common.sh@970 -- # wait 97655 00:24:19.118 ************************************ 00:24:19.118 END TEST keyring_file 00:24:19.118 ************************************ 00:24:19.118 00:24:19.118 real 0m15.850s 00:24:19.118 user 0m39.696s 00:24:19.118 sys 0m2.909s 00:24:19.118 21:05:29 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:19.118 21:05:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:19.378 21:05:29 -- spdk/autotest.sh@302 -- # [[ y == y ]] 00:24:19.378 21:05:29 -- spdk/autotest.sh@303 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:19.378 21:05:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:19.378 21:05:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:19.378 21:05:29 -- common/autotest_common.sh@10 -- # set +x 00:24:19.378 ************************************ 00:24:19.378 START TEST keyring_linux 00:24:19.378 ************************************ 00:24:19.378 21:05:29 keyring_linux -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:19.378 * Looking for test storage... 
00:24:19.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78d593be-f127-44be-9e85-a8fa7f0a66f9 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=78d593be-f127-44be-9e85-a8fa7f0a66f9 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:19.378 21:05:30 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.378 21:05:30 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.378 21:05:30 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.378 21:05:30 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.378 21:05:30 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.378 21:05:30 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.378 21:05:30 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:19.378 21:05:30 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@735 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@722 -- # local prefix key digest 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@724 -- # prefix=NVMeTLSkey-1 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@724 -- # key=00112233445566778899aabbccddeeff 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@724 -- # digest=0 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@725 -- # python - 00:24:19.378 21:05:30 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:19.378 /tmp/:spdk-test:key0 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@735 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@722 -- # local prefix key digest 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@724 -- # prefix=NVMeTLSkey-1 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@724 -- # key=112233445566778899aabbccddeeff00 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@724 -- # digest=0 00:24:19.378 21:05:30 keyring_linux -- nvmf/common.sh@725 -- # python - 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:19.378 /tmp/:spdk-test:key1 00:24:19.378 21:05:30 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=98031 00:24:19.378 21:05:30 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 98031 00:24:19.378 21:05:30 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 98031 ']' 00:24:19.378 21:05:30 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.378 21:05:30 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:19.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.378 21:05:30 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.378 21:05:30 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:19.378 21:05:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:19.637 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:19.637 [2024-08-11 21:05:30.171939] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
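Note: the prep_key trace above reduces to a small amount of shell. A minimal sketch, assuming keyutils is installed and using the test values from this log (the interchange string for key0 is the same one that keyctl add and keyctl print show further down; format_interchange_psk normally derives it from the raw key and digest):

    # prep_key key0: store the NVMe TLS PSK interchange string in a private temp file.
    # The literal value below is copied from this trace; it is a test vector, not a real secret.
    psk0='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    printf '%s' "$psk0" > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0      # key files must not be world-readable
    echo /tmp/:spdk-test:key0            # prep_key hands the path back to the caller

spdk_tgt is then started in the background and waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs; the startup banner just above and the EAL parameter line that follows belong to that spdk_tgt process (pid 98031).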
00:24:19.637 [2024-08-11 21:05:30.172030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98031 ] 00:24:19.637 [2024-08-11 21:05:30.299855] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.637 [2024-08-11 21:05:30.376675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.896 [2024-08-11 21:05:30.429187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:24:20.463 21:05:31 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@557 -- # xtrace_disable 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:20.463 [2024-08-11 21:05:31.157516] tcp.c: 729:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.463 null0 00:24:20.463 [2024-08-11 21:05:31.189492] tcp.c:1008:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:20.463 [2024-08-11 21:05:31.189723] tcp.c:1058:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@585 -- # [[ 0 == 0 ]] 00:24:20.463 21:05:31 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:20.463 588302864 00:24:20.463 21:05:31 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:20.463 102525806 00:24:20.463 21:05:31 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=98049 00:24:20.463 21:05:31 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:20.463 21:05:31 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 98049 /var/tmp/bperf.sock 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 98049 ']' 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:20.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:20.463 21:05:31 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:20.722 Invalid opts->opts_size 0 too small, please set opts_size correctly 00:24:20.722 [2024-08-11 21:05:31.275096] Starting SPDK v24.09-pre git sha1 227b8322c / DPDK 22.11.4 initialization... 
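Note: once spdk_tgt is listening on 127.0.0.1:4420, linux.sh@66-67 above loads both PSKs into the kernel session keyring before bdevperf is started with --wait-for-rpc. A standalone sketch of that step, assuming keyutils and the temp files written by prep_key:

    # Add each interchange PSK as a "user" key in the session keyring (@s).
    # keyctl prints the serial of each new key; this run got 588302864 and 102525806.
    keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s
    keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s
    keyctl show @s    # optional: confirm both keys are linked into the session keyring

bdevperf then comes up on /var/tmp/bperf.sock; the startup banner just above and the EAL parameter line that follows belong to that second process (pid 98049, core mask 0x2).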
00:24:20.722 [2024-08-11 21:05:31.275209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98049 ] 00:24:20.722 [2024-08-11 21:05:31.411010] app.c: 910:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.722 [2024-08-11 21:05:31.493271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.658 21:05:32 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:21.658 21:05:32 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:24:21.658 21:05:32 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:21.658 21:05:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:21.917 21:05:32 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:21.917 21:05:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:22.193 [2024-08-11 21:05:32.797224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:22.193 21:05:32 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:22.193 21:05:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:22.472 [2024-08-11 21:05:33.059062] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.472 nvme0n1 00:24:22.472 21:05:33 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:22.472 21:05:33 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:22.472 21:05:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:22.472 21:05:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:22.472 21:05:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:22.472 21:05:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.731 21:05:33 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:22.731 21:05:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:22.731 21:05:33 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:22.731 21:05:33 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:22.731 21:05:33 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:22.731 21:05:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:22.731 21:05:33 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:22.989 21:05:33 keyring_linux -- keyring/linux.sh@25 -- # sn=588302864 00:24:22.989 21:05:33 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:22.989 21:05:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
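Note: the whole client-side flow above is driven through bdevperf's RPC socket. A sketch of the sequence just traced, reusing the same rpc.py invocations and arguments that appear in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    $rpc -s "$sock" keyring_linux_set_options --enable      # let SPDK resolve keys from the kernel keyring
    $rpc -s "$sock" framework_start_init                     # finish the init deferred by --wait-for-rpc
    $rpc -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    $rpc -s "$sock" keyring_get_keys | jq length             # check_keys expects exactly one key in use
    keyctl search @s user :spdk-test:key0                    # kernel-side serial, compared against the RPC's .sn field

The serial and key-value comparisons that this feeds continue in the next lines of the trace.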
00:24:22.989 21:05:33 keyring_linux -- keyring/linux.sh@26 -- # [[ 588302864 == \5\8\8\3\0\2\8\6\4 ]] 00:24:22.989 21:05:33 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 588302864 00:24:22.989 21:05:33 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:22.989 21:05:33 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:23.248 Running I/O for 1 seconds... 00:24:24.184 00:24:24.184 Latency(us) 00:24:24.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.184 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:24.184 nvme0n1 : 1.01 14803.07 57.82 0.00 0.00 8602.18 2889.54 11439.01 00:24:24.184 =================================================================================================================== 00:24:24.184 Total : 14803.07 57.82 0.00 0.00 8602.18 2889.54 11439.01 00:24:24.184 0 00:24:24.184 21:05:34 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:24.184 21:05:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:24.442 21:05:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:24.442 21:05:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:24.442 21:05:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:24.442 21:05:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:24.442 21:05:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:24.442 21:05:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:24.701 21:05:35 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:24.701 21:05:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:24.701 21:05:35 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:24.701 21:05:35 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:24.701 21:05:35 keyring_linux -- common/autotest_common.sh@646 -- # local es=0 00:24:24.701 21:05:35 keyring_linux -- common/autotest_common.sh@648 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:24.701 21:05:35 keyring_linux -- common/autotest_common.sh@634 -- # local arg=bperf_cmd 00:24:24.701 21:05:35 keyring_linux -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:24.701 21:05:35 keyring_linux -- common/autotest_common.sh@638 -- # type -t bperf_cmd 00:24:24.701 21:05:35 keyring_linux -- common/autotest_common.sh@638 -- # case "$(type -t "$arg")" in 00:24:24.701 21:05:35 keyring_linux -- common/autotest_common.sh@649 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:24.701 21:05:35 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:24.960 [2024-08-11 21:05:35.674418] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:24.960 [2024-08-11 21:05:35.675083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc390 (107): Transport endpoint is not connected 00:24:24.960 [2024-08-11 21:05:35.676056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dc390 (9): Bad file descriptor 00:24:24.960 [2024-08-11 21:05:35.677053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:24.960 [2024-08-11 21:05:35.677074] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:24.960 [2024-08-11 21:05:35.677112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:24.960 request: 00:24:24.960 { 00:24:24.960 "name": "nvme0", 00:24:24.960 "trtype": "tcp", 00:24:24.960 "traddr": "127.0.0.1", 00:24:24.960 "adrfam": "ipv4", 00:24:24.961 "trsvcid": "4420", 00:24:24.961 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:24.961 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:24.961 "prchk_reftag": false, 00:24:24.961 "prchk_guard": false, 00:24:24.961 "hdgst": false, 00:24:24.961 "ddgst": false, 00:24:24.961 "psk": ":spdk-test:key1", 00:24:24.961 "method": "bdev_nvme_attach_controller", 00:24:24.961 "req_id": 1 00:24:24.961 } 00:24:24.961 Got JSON-RPC error response 00:24:24.961 response: 00:24:24.961 { 00:24:24.961 "code": -5, 00:24:24.961 "message": "Input/output error" 00:24:24.961 } 00:24:24.961 21:05:35 keyring_linux -- common/autotest_common.sh@649 -- # es=1 00:24:24.961 21:05:35 keyring_linux -- common/autotest_common.sh@657 -- # (( es > 128 )) 00:24:24.961 21:05:35 keyring_linux -- common/autotest_common.sh@668 -- # [[ -n '' ]] 00:24:24.961 21:05:35 keyring_linux -- common/autotest_common.sh@673 -- # (( !es == 0 )) 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@33 -- # sn=588302864 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 588302864 00:24:24.961 1 links removed 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@33 -- # sn=102525806 00:24:24.961 21:05:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 102525806 00:24:24.961 1 links removed 00:24:24.961 21:05:35 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 98049 00:24:24.961 21:05:35 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 98049 ']' 00:24:24.961 21:05:35 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 98049 00:24:24.961 21:05:35 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:24:24.961 21:05:35 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:24.961 21:05:35 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98049 00:24:25.220 killing process with pid 98049 00:24:25.220 Received shutdown signal, test time was about 1.000000 seconds 00:24:25.220 00:24:25.220 Latency(us) 00:24:25.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.220 =================================================================================================================== 00:24:25.220 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98049' 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@965 -- # kill 98049 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@970 -- # wait 98049 00:24:25.220 21:05:35 keyring_linux -- keyring/linux.sh@42 -- # killprocess 98031 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 98031 ']' 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 98031 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98031 00:24:25.220 killing process with pid 98031 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98031' 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@965 -- # kill 98031 00:24:25.220 21:05:35 keyring_linux -- common/autotest_common.sh@970 -- # wait 98031 00:24:25.787 ************************************ 00:24:25.787 END TEST keyring_linux 00:24:25.787 ************************************ 00:24:25.787 00:24:25.787 real 0m6.381s 00:24:25.787 user 0m12.665s 00:24:25.787 sys 0m1.501s 00:24:25.787 21:05:36 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:25.787 21:05:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:25.788 21:05:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:25.788 21:05:36 -- spdk/autotest.sh@323 -- # '[' 0 -eq 1 ']' 00:24:25.788 21:05:36 -- spdk/autotest.sh@327 -- # '[' 0 -eq 1 ']' 00:24:25.788 21:05:36 -- spdk/autotest.sh@332 -- # '[' 0 -eq 1 ']' 00:24:25.788 21:05:36 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:24:25.788 21:05:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:25.788 21:05:36 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:25.788 21:05:36 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:24:25.788 21:05:36 -- spdk/autotest.sh@358 -- # '[' 0 -eq 1 ']' 00:24:25.788 21:05:36 -- spdk/autotest.sh@363 -- # '[' 0 -eq 1 ']' 00:24:25.788 
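Note: the EXIT trap's cleanup (linux.sh@38-42 above) is what produced the "1 links removed" messages and the two killprocess calls. A standalone sketch of the same teardown, assuming the serials and pids from this run:

    # Unlink both test keys from the session keyring by serial, then stop both SPDK processes.
    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name") && keyctl unlink "$sn" @s
    done
    kill 98049    # bdevperf (reactor_1)
    kill 98031    # spdk_tgt (reactor_0)
    rm -f /tmp/:spdk-test:key0 /tmp/:spdk-test:key1    # assumed: cleanup also drops the temp PSK files

With keyring_linux finished, autotest.sh falls through the remaining '[' 0 -eq 1 ']' suite guards and moves on to post-run cleanup.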
21:05:36 -- spdk/autotest.sh@367 -- # '[' 0 -eq 1 ']' 00:24:25.788 21:05:36 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:25.788 21:05:36 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:24:25.788 21:05:36 -- spdk/autotest.sh@382 -- # [[ 0 -eq 1 ]] 00:24:25.788 21:05:36 -- spdk/autotest.sh@386 -- # [[ '' -eq 1 ]] 00:24:25.788 21:05:36 -- spdk/autotest.sh@391 -- # trap - SIGINT SIGTERM EXIT 00:24:25.788 21:05:36 -- spdk/autotest.sh@393 -- # timing_enter post_cleanup 00:24:25.788 21:05:36 -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:25.788 21:05:36 -- common/autotest_common.sh@10 -- # set +x 00:24:25.788 21:05:36 -- spdk/autotest.sh@394 -- # autotest_cleanup 00:24:25.788 21:05:36 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:24:25.788 21:05:36 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:24:25.788 21:05:36 -- common/autotest_common.sh@10 -- # set +x 00:24:27.692 INFO: APP EXITING 00:24:27.692 INFO: killing all VMs 00:24:27.692 INFO: killing vhost app 00:24:27.692 INFO: EXIT DONE 00:24:27.951 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.209 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:28.209 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:28.777 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.777 Cleaning 00:24:28.777 Removing: /var/run/dpdk/spdk0/config 00:24:28.777 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:28.777 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:28.777 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:28.777 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:28.777 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:28.777 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:28.777 Removing: /var/run/dpdk/spdk1/config 00:24:28.777 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:28.777 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:28.777 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:28.777 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:28.777 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:28.777 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:28.777 Removing: /var/run/dpdk/spdk2/config 00:24:28.777 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:28.777 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:28.777 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:28.777 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:28.777 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:28.777 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:28.777 Removing: /var/run/dpdk/spdk3/config 00:24:29.036 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:29.036 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:29.036 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:29.036 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:29.036 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:29.036 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:29.036 Removing: /var/run/dpdk/spdk4/config 00:24:29.036 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:29.036 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:29.036 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:29.036 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:29.036 Removing: 
/var/run/dpdk/spdk4/fbarray_memzone 00:24:29.036 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:29.036 Removing: /dev/shm/nvmf_trace.0 00:24:29.036 Removing: /dev/shm/spdk_tgt_trace.pid67606 00:24:29.036 Removing: /var/run/dpdk/spdk0 00:24:29.036 Removing: /var/run/dpdk/spdk1 00:24:29.036 Removing: /var/run/dpdk/spdk2 00:24:29.036 Removing: /var/run/dpdk/spdk3 00:24:29.036 Removing: /var/run/dpdk/spdk4 00:24:29.036 Removing: /var/run/dpdk/spdk_pid67461 00:24:29.036 Removing: /var/run/dpdk/spdk_pid67606 00:24:29.036 Removing: /var/run/dpdk/spdk_pid67791 00:24:29.036 Removing: /var/run/dpdk/spdk_pid67878 00:24:29.036 Removing: /var/run/dpdk/spdk_pid67905 00:24:29.036 Removing: /var/run/dpdk/spdk_pid68009 00:24:29.036 Removing: /var/run/dpdk/spdk_pid68024 00:24:29.036 Removing: /var/run/dpdk/spdk_pid68143 00:24:29.037 Removing: /var/run/dpdk/spdk_pid68339 00:24:29.037 Removing: /var/run/dpdk/spdk_pid68479 00:24:29.037 Removing: /var/run/dpdk/spdk_pid68544 00:24:29.037 Removing: /var/run/dpdk/spdk_pid68619 00:24:29.037 Removing: /var/run/dpdk/spdk_pid68698 00:24:29.037 Removing: /var/run/dpdk/spdk_pid68762 00:24:29.037 Removing: /var/run/dpdk/spdk_pid68800 00:24:29.037 Removing: /var/run/dpdk/spdk_pid68836 00:24:29.037 Removing: /var/run/dpdk/spdk_pid68892 00:24:29.037 Removing: /var/run/dpdk/spdk_pid68988 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69420 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69472 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69518 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69526 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69593 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69609 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69676 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69690 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69736 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69746 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69792 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69810 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69932 00:24:29.037 Removing: /var/run/dpdk/spdk_pid69962 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70037 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70339 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70352 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70388 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70401 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70417 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70443 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70457 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70478 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70497 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70516 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70526 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70545 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70564 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70579 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70600 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70614 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70635 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70654 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70663 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70683 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70719 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70727 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70762 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70826 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70849 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70866 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70889 00:24:29.037 Removing: /var/run/dpdk/spdk_pid70904 00:24:29.037 Removing: 
/var/run/dpdk/spdk_pid70906 00:24:29.295 Removing: /var/run/dpdk/spdk_pid70954 00:24:29.295 Removing: /var/run/dpdk/spdk_pid70973 00:24:29.295 Removing: /var/run/dpdk/spdk_pid70996 00:24:29.295 Removing: /var/run/dpdk/spdk_pid71011 00:24:29.295 Removing: /var/run/dpdk/spdk_pid71015 00:24:29.295 Removing: /var/run/dpdk/spdk_pid71030 00:24:29.295 Removing: /var/run/dpdk/spdk_pid71034 00:24:29.295 Removing: /var/run/dpdk/spdk_pid71049 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71053 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71068 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71092 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71124 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71134 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71162 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71177 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71180 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71225 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71237 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71263 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71275 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71278 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71291 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71299 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71306 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71314 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71321 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71390 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71437 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71542 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71581 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71626 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71640 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71657 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71677 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71714 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71724 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71794 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71814 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71854 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71916 00:24:29.296 Removing: /var/run/dpdk/spdk_pid71972 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72007 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72093 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72136 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72168 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72392 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72483 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72507 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72537 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72570 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72609 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72643 00:24:29.296 Removing: /var/run/dpdk/spdk_pid72674 00:24:29.296 Removing: /var/run/dpdk/spdk_pid73042 00:24:29.296 Removing: /var/run/dpdk/spdk_pid73082 00:24:29.296 Removing: /var/run/dpdk/spdk_pid73408 00:24:29.296 Removing: /var/run/dpdk/spdk_pid73870 00:24:29.296 Removing: /var/run/dpdk/spdk_pid74140 00:24:29.296 Removing: /var/run/dpdk/spdk_pid75005 00:24:29.296 Removing: /var/run/dpdk/spdk_pid75892 00:24:29.296 Removing: /var/run/dpdk/spdk_pid76009 00:24:29.296 Removing: /var/run/dpdk/spdk_pid76077 00:24:29.296 Removing: /var/run/dpdk/spdk_pid77470 00:24:29.296 Removing: /var/run/dpdk/spdk_pid77762 00:24:29.296 Removing: /var/run/dpdk/spdk_pid81066 00:24:29.296 Removing: /var/run/dpdk/spdk_pid81398 00:24:29.296 Removing: /var/run/dpdk/spdk_pid81514 00:24:29.296 Removing: /var/run/dpdk/spdk_pid81653 00:24:29.296 Removing: /var/run/dpdk/spdk_pid81668 
00:24:29.296 Removing: /var/run/dpdk/spdk_pid81682 00:24:29.296 Removing: /var/run/dpdk/spdk_pid81702 00:24:29.296 Removing: /var/run/dpdk/spdk_pid81774 00:24:29.296 Removing: /var/run/dpdk/spdk_pid81908 00:24:29.296 Removing: /var/run/dpdk/spdk_pid82045 00:24:29.296 Removing: /var/run/dpdk/spdk_pid82126 00:24:29.296 Removing: /var/run/dpdk/spdk_pid82314 00:24:29.296 Removing: /var/run/dpdk/spdk_pid82398 00:24:29.296 Removing: /var/run/dpdk/spdk_pid82490 00:24:29.296 Removing: /var/run/dpdk/spdk_pid82835 00:24:29.296 Removing: /var/run/dpdk/spdk_pid83230 00:24:29.296 Removing: /var/run/dpdk/spdk_pid83231 00:24:29.296 Removing: /var/run/dpdk/spdk_pid83232 00:24:29.296 Removing: /var/run/dpdk/spdk_pid83479 00:24:29.296 Removing: /var/run/dpdk/spdk_pid83719 00:24:29.555 Removing: /var/run/dpdk/spdk_pid83721 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86062 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86064 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86381 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86395 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86409 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86440 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86445 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86531 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86539 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86648 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86650 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86758 00:24:29.555 Removing: /var/run/dpdk/spdk_pid86760 00:24:29.555 Removing: /var/run/dpdk/spdk_pid87204 00:24:29.555 Removing: /var/run/dpdk/spdk_pid87250 00:24:29.555 Removing: /var/run/dpdk/spdk_pid87358 00:24:29.555 Removing: /var/run/dpdk/spdk_pid87442 00:24:29.555 Removing: /var/run/dpdk/spdk_pid87774 00:24:29.555 Removing: /var/run/dpdk/spdk_pid87963 00:24:29.555 Removing: /var/run/dpdk/spdk_pid88392 00:24:29.555 Removing: /var/run/dpdk/spdk_pid88949 00:24:29.555 Removing: /var/run/dpdk/spdk_pid89824 00:24:29.555 Removing: /var/run/dpdk/spdk_pid90447 00:24:29.555 Removing: /var/run/dpdk/spdk_pid90449 00:24:29.555 Removing: /var/run/dpdk/spdk_pid92434 00:24:29.555 Removing: /var/run/dpdk/spdk_pid92487 00:24:29.555 Removing: /var/run/dpdk/spdk_pid92534 00:24:29.555 Removing: /var/run/dpdk/spdk_pid92588 00:24:29.555 Removing: /var/run/dpdk/spdk_pid92701 00:24:29.555 Removing: /var/run/dpdk/spdk_pid92749 00:24:29.555 Removing: /var/run/dpdk/spdk_pid92802 00:24:29.555 Removing: /var/run/dpdk/spdk_pid92855 00:24:29.555 Removing: /var/run/dpdk/spdk_pid93209 00:24:29.555 Removing: /var/run/dpdk/spdk_pid94426 00:24:29.555 Removing: /var/run/dpdk/spdk_pid94570 00:24:29.555 Removing: /var/run/dpdk/spdk_pid94815 00:24:29.555 Removing: /var/run/dpdk/spdk_pid95412 00:24:29.555 Removing: /var/run/dpdk/spdk_pid95572 00:24:29.555 Removing: /var/run/dpdk/spdk_pid95730 00:24:29.555 Removing: /var/run/dpdk/spdk_pid95826 00:24:29.555 Removing: /var/run/dpdk/spdk_pid95983 00:24:29.555 Removing: /var/run/dpdk/spdk_pid96093 00:24:29.555 Removing: /var/run/dpdk/spdk_pid96801 00:24:29.555 Removing: /var/run/dpdk/spdk_pid96841 00:24:29.555 Removing: /var/run/dpdk/spdk_pid96872 00:24:29.555 Removing: /var/run/dpdk/spdk_pid97127 00:24:29.555 Removing: /var/run/dpdk/spdk_pid97161 00:24:29.555 Removing: /var/run/dpdk/spdk_pid97192 00:24:29.555 Removing: /var/run/dpdk/spdk_pid97655 00:24:29.555 Removing: /var/run/dpdk/spdk_pid97672 00:24:29.555 Removing: /var/run/dpdk/spdk_pid97924 00:24:29.555 Removing: /var/run/dpdk/spdk_pid98031 00:24:29.555 Removing: /var/run/dpdk/spdk_pid98049 00:24:29.555 Clean 00:24:29.555 
21:05:40 -- common/autotest_common.sh@1447 -- # return 0 00:24:29.555 21:05:40 -- spdk/autotest.sh@395 -- # timing_exit post_cleanup 00:24:29.555 21:05:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.555 21:05:40 -- common/autotest_common.sh@10 -- # set +x 00:24:29.814 21:05:40 -- spdk/autotest.sh@397 -- # timing_exit autotest 00:24:29.814 21:05:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.814 21:05:40 -- common/autotest_common.sh@10 -- # set +x 00:24:29.814 21:05:40 -- spdk/autotest.sh@398 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:29.814 21:05:40 -- spdk/autotest.sh@400 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:29.814 21:05:40 -- spdk/autotest.sh@400 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:29.814 21:05:40 -- spdk/autotest.sh@402 -- # hash lcov 00:24:29.814 21:05:40 -- spdk/autotest.sh@402 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:24:29.814 21:05:40 -- spdk/autotest.sh@404 -- # hostname 00:24:29.814 21:05:40 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:30.073 geninfo: WARNING: invalid characters removed from testname! 00:24:56.637 21:06:03 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:56.637 21:06:06 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:58.538 21:06:08 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:01.071 21:06:11 -- spdk/autotest.sh@408 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:03.037 21:06:13 -- spdk/autotest.sh@409 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:05.569 21:06:16 -- spdk/autotest.sh@410 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:08.103 21:06:18 -- spdk/autotest.sh@411 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:08.103 21:06:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:08.103 21:06:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:08.103 21:06:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.103 21:06:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.103 21:06:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.103 21:06:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.103 21:06:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.103 21:06:18 -- paths/export.sh@5 -- $ export PATH 00:25:08.103 21:06:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.103 21:06:18 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:08.103 21:06:18 -- common/autobuild_common.sh@447 -- $ date +%s 00:25:08.103 21:06:18 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1723410378.XXXXXX 00:25:08.103 21:06:18 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1723410378.Sotacw 00:25:08.103 21:06:18 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:25:08.103 21:06:18 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:25:08.103 21:06:18 -- common/autobuild_common.sh@454 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:25:08.103 21:06:18 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:25:08.103 21:06:18 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:08.103 21:06:18 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme 
--exclude /tmp --status-bugs' 00:25:08.103 21:06:18 -- common/autobuild_common.sh@463 -- $ get_config_params 00:25:08.103 21:06:18 -- common/autotest_common.sh@394 -- $ xtrace_disable 00:25:08.103 21:06:18 -- common/autotest_common.sh@10 -- $ set +x 00:25:08.103 21:06:18 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:25:08.103 21:06:18 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:25:08.103 21:06:18 -- pm/common@17 -- $ local monitor 00:25:08.103 21:06:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:08.103 21:06:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:08.103 21:06:18 -- pm/common@25 -- $ sleep 1 00:25:08.103 21:06:18 -- pm/common@21 -- $ date +%s 00:25:08.103 21:06:18 -- pm/common@21 -- $ date +%s 00:25:08.103 21:06:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1723410378 00:25:08.103 21:06:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1723410378 00:25:08.103 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1723410378_collect-cpu-load.pm.log 00:25:08.103 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1723410378_collect-vmstat.pm.log 00:25:09.040 21:06:19 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:25:09.040 21:06:19 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:09.041 21:06:19 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:09.041 21:06:19 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:09.041 21:06:19 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:25:09.041 21:06:19 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:09.041 21:06:19 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:09.041 21:06:19 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:09.041 21:06:19 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:09.041 21:06:19 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:09.041 21:06:19 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:09.041 21:06:19 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:09.041 21:06:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:09.041 21:06:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:09.041 21:06:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:09.041 21:06:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:09.041 21:06:19 -- pm/common@44 -- $ pid=99789 00:25:09.041 21:06:19 -- pm/common@50 -- $ kill -TERM 99789 00:25:09.041 21:06:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:09.041 21:06:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:09.041 21:06:19 -- pm/common@44 -- $ pid=99791 00:25:09.041 21:06:19 -- pm/common@50 -- $ kill -TERM 99791 00:25:09.041 + [[ -n 6100 ]] 
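Note: stop_monitor_resources above mirrors the start side earlier in this section: each collector wrote a pidfile under the output power/ directory, and teardown signals whatever pid is recorded there. A sketch under the assumption that the pid is read straight from the pidfile (the trace only shows the existence check and the resulting kill -TERM):

    out=/home/vagrant/spdk_repo/spdk/../output/power
    for mon in collect-cpu-load collect-vmstat; do
        pidfile="$out/$mon.pid"
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"   # this run killed 99789 and 99791
    done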
00:25:09.041 + sudo kill 6100 00:25:09.049 [Pipeline] } 00:25:09.059 [Pipeline] // timeout 00:25:09.062 [Pipeline] } 00:25:09.071 [Pipeline] // stage 00:25:09.074 [Pipeline] } 00:25:09.083 [Pipeline] // catchError 00:25:09.089 [Pipeline] stage 00:25:09.091 [Pipeline] { (Stop VM) 00:25:09.099 [Pipeline] sh 00:25:09.375 + vagrant halt 00:25:13.580 ==> default: Halting domain... 00:25:20.154 [Pipeline] sh 00:25:20.432 + vagrant destroy -f 00:25:23.718 ==> default: Removing domain... 00:25:24.298 [Pipeline] sh 00:25:24.577 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:25:24.585 [Pipeline] } 00:25:24.599 [Pipeline] // stage 00:25:24.604 [Pipeline] } 00:25:24.618 [Pipeline] // dir 00:25:24.624 [Pipeline] } 00:25:24.638 [Pipeline] // wrap 00:25:24.644 [Pipeline] } 00:25:24.656 [Pipeline] // catchError 00:25:24.666 [Pipeline] stage 00:25:24.668 [Pipeline] { (Epilogue) 00:25:24.681 [Pipeline] sh 00:25:24.960 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:31.536 [Pipeline] catchError 00:25:31.537 [Pipeline] { 00:25:31.550 [Pipeline] sh 00:25:31.830 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:31.830 Artifacts sizes are good 00:25:31.838 [Pipeline] } 00:25:31.852 [Pipeline] // catchError 00:25:31.862 [Pipeline] archiveArtifacts 00:25:31.869 Archiving artifacts 00:25:32.043 [Pipeline] cleanWs 00:25:32.053 [WS-CLEANUP] Deleting project workspace... 00:25:32.053 [WS-CLEANUP] Deferred wipeout is used... 00:25:32.058 [WS-CLEANUP] done 00:25:32.060 [Pipeline] } 00:25:32.075 [Pipeline] // stage 00:25:32.080 [Pipeline] } 00:25:32.094 [Pipeline] // node 00:25:32.099 [Pipeline] End of Pipeline 00:25:32.150 Finished: SUCCESS